Nov 25 22:53:09 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 22:53:09 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 22:53:09 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 22:53:09 localhost kernel: BIOS-provided physical RAM map:
Nov 25 22:53:09 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 22:53:09 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 22:53:09 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 22:53:09 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 25 22:53:09 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 25 22:53:09 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 22:53:09 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 22:53:09 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 25 22:53:09 localhost kernel: NX (Execute Disable) protection: active
Nov 25 22:53:09 localhost kernel: APIC: Static calls initialized
Nov 25 22:53:09 localhost kernel: SMBIOS 2.8 present.
Nov 25 22:53:09 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 25 22:53:09 localhost kernel: Hypervisor detected: KVM
Nov 25 22:53:09 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 22:53:09 localhost kernel: kvm-clock: using sched offset of 4746104976 cycles
Nov 25 22:53:09 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 22:53:09 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 25 22:53:09 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 25 22:53:09 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 25 22:53:09 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 25 22:53:09 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 22:53:09 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 22:53:09 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 25 22:53:09 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 25 22:53:09 localhost kernel: Using GB pages for direct mapping
Nov 25 22:53:09 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 25 22:53:09 localhost kernel: ACPI: Early table checksum verification disabled
Nov 25 22:53:09 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 25 22:53:09 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 22:53:09 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 22:53:09 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 22:53:09 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 25 22:53:09 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 22:53:09 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 22:53:09 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 25 22:53:09 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 25 22:53:09 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 25 22:53:09 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 25 22:53:09 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 25 22:53:09 localhost kernel: No NUMA configuration found
Nov 25 22:53:09 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 25 22:53:09 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 25 22:53:09 localhost kernel: crashkernel reserved: 0x00000000a9000000 - 0x00000000b9000000 (256 MB)
Nov 25 22:53:09 localhost kernel: Zone ranges:
Nov 25 22:53:09 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 22:53:09 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 22:53:09 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 22:53:09 localhost kernel:   Device   empty
Nov 25 22:53:09 localhost kernel: Movable zone start for each node
Nov 25 22:53:09 localhost kernel: Early memory node ranges
Nov 25 22:53:09 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 22:53:09 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 25 22:53:09 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 22:53:09 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 25 22:53:09 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 22:53:09 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 22:53:09 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 22:53:09 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 22:53:09 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 22:53:09 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 22:53:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 22:53:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 22:53:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 22:53:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 22:53:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 22:53:09 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 22:53:09 localhost kernel: TSC deadline timer available
Nov 25 22:53:09 localhost kernel: CPU topo: Max. logical packages:   8
Nov 25 22:53:09 localhost kernel: CPU topo: Max. logical dies:       8
Nov 25 22:53:09 localhost kernel: CPU topo: Max. dies per package:   1
Nov 25 22:53:09 localhost kernel: CPU topo: Max. threads per core:   1
Nov 25 22:53:09 localhost kernel: CPU topo: Num. cores per package:     1
Nov 25 22:53:09 localhost kernel: CPU topo: Num. threads per package:   1
Nov 25 22:53:09 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 25 22:53:09 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 22:53:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 22:53:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 22:53:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 22:53:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 22:53:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 25 22:53:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 25 22:53:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 22:53:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 22:53:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 22:53:09 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 25 22:53:09 localhost kernel: Booting paravirtualized kernel on KVM
Nov 25 22:53:09 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 22:53:09 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 25 22:53:09 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 25 22:53:09 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 25 22:53:09 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 25 22:53:09 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 25 22:53:09 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 22:53:09 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 22:53:09 localhost kernel: random: crng init done
Nov 25 22:53:09 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 22:53:09 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 22:53:09 localhost kernel: Fallback order for Node 0: 0 
Nov 25 22:53:09 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 22:53:09 localhost kernel: Policy zone: Normal
Nov 25 22:53:09 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 22:53:09 localhost kernel: software IO TLB: area num 8.
Nov 25 22:53:09 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 25 22:53:09 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 22:53:09 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 22:53:09 localhost kernel: Dynamic Preempt: voluntary
Nov 25 22:53:09 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 22:53:09 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 25 22:53:09 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 25 22:53:09 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 25 22:53:09 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 25 22:53:09 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 25 22:53:09 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 22:53:09 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 25 22:53:09 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 22:53:09 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 22:53:09 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 22:53:09 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 25 22:53:09 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 22:53:09 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 22:53:09 localhost kernel: Console: colour VGA+ 80x25
Nov 25 22:53:09 localhost kernel: printk: console [ttyS0] enabled
Nov 25 22:53:09 localhost kernel: ACPI: Core revision 20230331
Nov 25 22:53:09 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 22:53:09 localhost kernel: x2apic enabled
Nov 25 22:53:09 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 22:53:09 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 22:53:09 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 25 22:53:09 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 22:53:09 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 22:53:09 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 22:53:09 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 22:53:09 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 22:53:09 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 22:53:09 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 25 22:53:09 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 25 22:53:09 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 22:53:09 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 22:53:09 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 22:53:09 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 22:53:09 localhost kernel: x86/bugs: return thunk changed
Nov 25 22:53:09 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 22:53:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 22:53:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 22:53:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 22:53:09 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 22:53:09 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 25 22:53:09 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 25 22:53:09 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 25 22:53:09 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 22:53:09 localhost kernel: landlock: Up and running.
Nov 25 22:53:09 localhost kernel: Yama: becoming mindful.
Nov 25 22:53:09 localhost kernel: SELinux:  Initializing.
Nov 25 22:53:09 localhost kernel: LSM support for eBPF active
Nov 25 22:53:09 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 22:53:09 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 22:53:09 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 25 22:53:09 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 22:53:09 localhost kernel: ... version:                0
Nov 25 22:53:09 localhost kernel: ... bit width:              48
Nov 25 22:53:09 localhost kernel: ... generic registers:      6
Nov 25 22:53:09 localhost kernel: ... value mask:             0000ffffffffffff
Nov 25 22:53:09 localhost kernel: ... max period:             00007fffffffffff
Nov 25 22:53:09 localhost kernel: ... fixed-purpose events:   0
Nov 25 22:53:09 localhost kernel: ... event mask:             000000000000003f
Nov 25 22:53:09 localhost kernel: signal: max sigframe size: 1776
Nov 25 22:53:09 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 25 22:53:09 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 25 22:53:09 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 25 22:53:09 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 25 22:53:09 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 25 22:53:09 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 25 22:53:09 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 25 22:53:09 localhost kernel: node 0 deferred pages initialised in 10ms
Nov 25 22:53:09 localhost kernel: Memory: 7765840K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616272K reserved, 0K cma-reserved)
Nov 25 22:53:09 localhost kernel: devtmpfs: initialized
Nov 25 22:53:09 localhost kernel: x86/mm: Memory block size: 128MB
Nov 25 22:53:09 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 22:53:09 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 25 22:53:09 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 22:53:09 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 22:53:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 22:53:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 22:53:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 22:53:09 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 25 22:53:09 localhost kernel: audit: type=2000 audit(1764111187.269:1): state=initialized audit_enabled=0 res=1
Nov 25 22:53:09 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 22:53:09 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 22:53:09 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 22:53:09 localhost kernel: cpuidle: using governor menu
Nov 25 22:53:09 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 22:53:09 localhost kernel: PCI: Using configuration type 1 for base access
Nov 25 22:53:09 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 25 22:53:09 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 22:53:09 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 22:53:09 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 22:53:09 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 22:53:09 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 22:53:09 localhost kernel: Demotion targets for Node 0: null
Nov 25 22:53:09 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 22:53:09 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 25 22:53:09 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 25 22:53:09 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 22:53:09 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 22:53:09 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 22:53:09 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 22:53:09 localhost kernel: ACPI: Interpreter enabled
Nov 25 22:53:09 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 25 22:53:09 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 22:53:09 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 22:53:09 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 22:53:09 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 25 22:53:09 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 22:53:09 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [3] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [4] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [5] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [6] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [7] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [8] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [9] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [10] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [11] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [12] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [13] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [14] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [15] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [16] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [17] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [18] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [19] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [20] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [21] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [22] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [23] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [24] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [25] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [26] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [27] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [28] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [29] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [30] registered
Nov 25 22:53:09 localhost kernel: acpiphp: Slot [31] registered
Nov 25 22:53:09 localhost kernel: PCI host bridge to bus 0000:00
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 22:53:09 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 25 22:53:09 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 22:53:09 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 25 22:53:09 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 25 22:53:09 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 25 22:53:09 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 25 22:53:09 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 22:53:09 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 22:53:09 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 25 22:53:09 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 25 22:53:09 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 25 22:53:09 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 25 22:53:09 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 25 22:53:09 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 25 22:53:09 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 25 22:53:09 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 25 22:53:09 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 22:53:09 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 25 22:53:09 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 25 22:53:09 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 22:53:09 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 25 22:53:09 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 25 22:53:09 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 22:53:09 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 22:53:09 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 22:53:09 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 22:53:09 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 25 22:53:09 localhost kernel: iommu: Default domain type: Translated
Nov 25 22:53:09 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 22:53:09 localhost kernel: SCSI subsystem initialized
Nov 25 22:53:09 localhost kernel: ACPI: bus type USB registered
Nov 25 22:53:09 localhost kernel: usbcore: registered new interface driver usbfs
Nov 25 22:53:09 localhost kernel: usbcore: registered new interface driver hub
Nov 25 22:53:09 localhost kernel: usbcore: registered new device driver usb
Nov 25 22:53:09 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 22:53:09 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 22:53:09 localhost kernel: PTP clock support registered
Nov 25 22:53:09 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 25 22:53:09 localhost kernel: NetLabel: Initializing
Nov 25 22:53:09 localhost kernel: NetLabel:  domain hash size = 128
Nov 25 22:53:09 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 22:53:09 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 22:53:09 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 25 22:53:09 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 25 22:53:09 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 25 22:53:09 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 25 22:53:09 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 25 22:53:09 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 25 22:53:09 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 22:53:09 localhost kernel: vgaarb: loaded
Nov 25 22:53:09 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 22:53:09 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 22:53:09 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 22:53:09 localhost kernel: pnp: PnP ACPI init
Nov 25 22:53:09 localhost kernel: pnp 00:03: [dma 2]
Nov 25 22:53:09 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 25 22:53:09 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 22:53:09 localhost kernel: NET: Registered PF_INET protocol family
Nov 25 22:53:09 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 22:53:09 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 22:53:09 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 22:53:09 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 22:53:09 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 22:53:09 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 22:53:09 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 22:53:09 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 22:53:09 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 22:53:09 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 22:53:09 localhost kernel: NET: Registered PF_XDP protocol family
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 25 22:53:09 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 25 22:53:09 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 25 22:53:09 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 25 22:53:09 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 80882 usecs
Nov 25 22:53:09 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 25 22:53:09 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 22:53:09 localhost kernel: software IO TLB: mapped [mem 0x00000000a5000000-0x00000000a9000000] (64MB)
Nov 25 22:53:09 localhost kernel: ACPI: bus type thunderbolt registered
Nov 25 22:53:09 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 25 22:53:09 localhost kernel: Initialise system trusted keyrings
Nov 25 22:53:09 localhost kernel: Key type blacklist registered
Nov 25 22:53:09 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 22:53:09 localhost kernel: zbud: loaded
Nov 25 22:53:09 localhost kernel: integrity: Platform Keyring initialized
Nov 25 22:53:09 localhost kernel: integrity: Machine keyring initialized
Nov 25 22:53:09 localhost kernel: Freeing initrd memory: 85868K
Nov 25 22:53:09 localhost kernel: NET: Registered PF_ALG protocol family
Nov 25 22:53:09 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 25 22:53:09 localhost kernel: Key type asymmetric registered
Nov 25 22:53:09 localhost kernel: Asymmetric key parser 'x509' registered
Nov 25 22:53:09 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 22:53:09 localhost kernel: io scheduler mq-deadline registered
Nov 25 22:53:09 localhost kernel: io scheduler kyber registered
Nov 25 22:53:09 localhost kernel: io scheduler bfq registered
Nov 25 22:53:09 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 22:53:09 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 22:53:09 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 22:53:09 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 25 22:53:09 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 25 22:53:09 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 25 22:53:09 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 25 22:53:09 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 22:53:09 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 22:53:09 localhost kernel: Non-volatile memory driver v1.3
Nov 25 22:53:09 localhost kernel: rdac: device handler registered
Nov 25 22:53:09 localhost kernel: hp_sw: device handler registered
Nov 25 22:53:09 localhost kernel: emc: device handler registered
Nov 25 22:53:09 localhost kernel: alua: device handler registered
Nov 25 22:53:09 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 25 22:53:09 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 25 22:53:09 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 25 22:53:09 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 25 22:53:09 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 22:53:09 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 22:53:09 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 25 22:53:09 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 22:53:09 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 25 22:53:09 localhost kernel: hub 1-0:1.0: USB hub found
Nov 25 22:53:09 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 25 22:53:09 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 22:53:09 localhost kernel: usbserial: USB Serial support registered for generic
Nov 25 22:53:09 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 22:53:09 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 22:53:09 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 22:53:09 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 22:53:09 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 25 22:53:09 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 22:53:09 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 25 22:53:09 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-25T22:53:08 UTC (1764111188)
Nov 25 22:53:09 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 25 22:53:09 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 22:53:09 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 22:53:09 localhost kernel: usbcore: registered new interface driver usbhid
Nov 25 22:53:09 localhost kernel: usbhid: USB HID core driver
Nov 25 22:53:09 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 25 22:53:09 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 22:53:09 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 22:53:09 localhost kernel: Initializing XFRM netlink socket
Nov 25 22:53:09 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 25 22:53:09 localhost kernel: Segment Routing with IPv6
Nov 25 22:53:09 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 25 22:53:09 localhost kernel: mpls_gso: MPLS GSO support
Nov 25 22:53:09 localhost kernel: IPI shorthand broadcast: enabled
Nov 25 22:53:09 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 22:53:09 localhost kernel: AES CTR mode by8 optimization enabled
Nov 25 22:53:09 localhost kernel: sched_clock: Marking stable (1258007193, 153474960)->(1545379434, -133897281)
Nov 25 22:53:09 localhost kernel: registered taskstats version 1
Nov 25 22:53:09 localhost kernel: Loading compiled-in X.509 certificates
Nov 25 22:53:09 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 22:53:09 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 22:53:09 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 22:53:09 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 22:53:09 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 22:53:09 localhost kernel: Demotion targets for Node 0: null
Nov 25 22:53:09 localhost kernel: page_owner is disabled
Nov 25 22:53:09 localhost kernel: Key type .fscrypt registered
Nov 25 22:53:09 localhost kernel: Key type fscrypt-provisioning registered
Nov 25 22:53:09 localhost kernel: Key type big_key registered
Nov 25 22:53:09 localhost kernel: Key type encrypted registered
Nov 25 22:53:09 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 22:53:09 localhost kernel: Loading compiled-in module X.509 certificates
Nov 25 22:53:09 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 22:53:09 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 25 22:53:09 localhost kernel: ima: No architecture policies found
Nov 25 22:53:09 localhost kernel: evm: Initialising EVM extended attributes:
Nov 25 22:53:09 localhost kernel: evm: security.selinux
Nov 25 22:53:09 localhost kernel: evm: security.SMACK64 (disabled)
Nov 25 22:53:09 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 22:53:09 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 22:53:09 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 22:53:09 localhost kernel: evm: security.apparmor (disabled)
Nov 25 22:53:09 localhost kernel: evm: security.ima
Nov 25 22:53:09 localhost kernel: evm: security.capability
Nov 25 22:53:09 localhost kernel: evm: HMAC attrs: 0x1
Nov 25 22:53:09 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 22:53:09 localhost kernel: Running certificate verification RSA selftest
Nov 25 22:53:09 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 22:53:09 localhost kernel: Running certificate verification ECDSA selftest
Nov 25 22:53:09 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 22:53:09 localhost kernel: clk: Disabling unused clocks
Nov 25 22:53:09 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 25 22:53:09 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 22:53:09 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 25 22:53:09 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 22:53:09 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 22:53:09 localhost kernel: Run /init as init process
Nov 25 22:53:09 localhost kernel:   with arguments:
Nov 25 22:53:09 localhost kernel:     /init
Nov 25 22:53:09 localhost kernel:   with environment:
Nov 25 22:53:09 localhost kernel:     HOME=/
Nov 25 22:53:09 localhost kernel:     TERM=linux
Nov 25 22:53:09 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 25 22:53:09 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 22:53:09 localhost systemd[1]: Detected virtualization kvm.
Nov 25 22:53:09 localhost systemd[1]: Detected architecture x86-64.
Nov 25 22:53:09 localhost systemd[1]: Running in initrd.
Nov 25 22:53:09 localhost systemd[1]: No hostname configured, using default hostname.
Nov 25 22:53:09 localhost systemd[1]: Hostname set to <localhost>.
Nov 25 22:53:09 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 25 22:53:09 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 22:53:09 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 22:53:09 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 22:53:09 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 25 22:53:09 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 25 22:53:09 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 22:53:09 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 25 22:53:09 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 25 22:53:09 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 22:53:09 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 22:53:09 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 25 22:53:09 localhost systemd[1]: Reached target Local File Systems.
Nov 25 22:53:09 localhost systemd[1]: Reached target Path Units.
Nov 25 22:53:09 localhost systemd[1]: Reached target Slice Units.
Nov 25 22:53:09 localhost systemd[1]: Reached target Swaps.
Nov 25 22:53:09 localhost systemd[1]: Reached target Timer Units.
Nov 25 22:53:09 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 22:53:09 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 25 22:53:09 localhost systemd[1]: Listening on Journal Socket.
Nov 25 22:53:09 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 22:53:09 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 22:53:09 localhost systemd[1]: Reached target Socket Units.
Nov 25 22:53:09 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 22:53:09 localhost systemd[1]: Starting Journal Service...
Nov 25 22:53:09 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 22:53:09 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 22:53:09 localhost systemd[1]: Starting Create System Users...
Nov 25 22:53:09 localhost systemd[1]: Starting Setup Virtual Console...
Nov 25 22:53:09 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 22:53:09 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 22:53:09 localhost systemd[1]: Finished Create System Users.
Nov 25 22:53:09 localhost systemd-journald[303]: Journal started
Nov 25 22:53:09 localhost systemd-journald[303]: Runtime Journal (/run/log/journal/99edd01fcb884b88a56d15f374f9d1d0) is 8.0M, max 153.6M, 145.6M free.
Nov 25 22:53:09 localhost systemd-sysusers[307]: Creating group 'users' with GID 100.
Nov 25 22:53:09 localhost systemd-sysusers[307]: Creating group 'dbus' with GID 81.
Nov 25 22:53:09 localhost systemd-sysusers[307]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 22:53:09 localhost systemd[1]: Started Journal Service.
Nov 25 22:53:09 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 22:53:09 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 22:53:09 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 22:53:09 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 22:53:09 localhost systemd[1]: Finished Setup Virtual Console.
Nov 25 22:53:09 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 22:53:09 localhost systemd[1]: Starting dracut cmdline hook...
Nov 25 22:53:09 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 22:53:09 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 22:53:09 localhost systemd[1]: Finished dracut cmdline hook.
Nov 25 22:53:09 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 25 22:53:09 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 22:53:09 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 25 22:53:09 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 22:53:09 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 25 22:53:09 localhost kernel: RPC: Registered udp transport module.
Nov 25 22:53:09 localhost kernel: RPC: Registered tcp transport module.
Nov 25 22:53:09 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 22:53:09 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 22:53:09 localhost rpc.statd[442]: Version 2.5.4 starting
Nov 25 22:53:09 localhost rpc.statd[442]: Initializing NSM state
Nov 25 22:53:09 localhost rpc.idmapd[447]: Setting log level to 0
Nov 25 22:53:09 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 25 22:53:09 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 22:53:09 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 22:53:09 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 22:53:09 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 25 22:53:09 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 25 22:53:09 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 22:53:09 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 25 22:53:09 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 22:53:09 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 22:53:09 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 22:53:09 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 22:53:09 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 22:53:09 localhost systemd[1]: Reached target Network.
Nov 25 22:53:09 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 22:53:09 localhost systemd[1]: Starting dracut initqueue hook...
Nov 25 22:53:09 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 25 22:53:09 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 22:53:10 localhost kernel: libata version 3.00 loaded.
Nov 25 22:53:10 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 25 22:53:10 localhost kernel:  vda: vda1
Nov 25 22:53:10 localhost kernel: scsi host0: ata_piix
Nov 25 22:53:10 localhost kernel: scsi host1: ata_piix
Nov 25 22:53:10 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 25 22:53:10 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 25 22:53:10 localhost systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 25 22:53:10 localhost systemd[1]: Reached target Initrd Root Device.
Nov 25 22:53:10 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 25 22:53:10 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 25 22:53:10 localhost kernel: ata1: found unknown device (class 0)
Nov 25 22:53:10 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 22:53:10 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 22:53:10 localhost systemd[1]: Reached target System Initialization.
Nov 25 22:53:10 localhost systemd[1]: Reached target Basic System.
Nov 25 22:53:10 localhost systemd-udevd[490]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 22:53:10 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 22:53:10 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 22:53:10 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 22:53:10 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 25 22:53:10 localhost systemd[1]: Finished dracut initqueue hook.
Nov 25 22:53:10 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 22:53:10 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 22:53:10 localhost systemd[1]: Reached target Remote File Systems.
Nov 25 22:53:10 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 25 22:53:10 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 25 22:53:10 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 25 22:53:10 localhost systemd-fsck[552]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 22:53:10 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 25 22:53:10 localhost systemd[1]: Mounting /sysroot...
Nov 25 22:53:10 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 22:53:10 localhost kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 25 22:53:10 localhost kernel: XFS (vda1): Ending clean mount
Nov 25 22:53:10 localhost systemd[1]: Mounted /sysroot.
Nov 25 22:53:10 localhost systemd[1]: Reached target Initrd Root File System.
Nov 25 22:53:10 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 22:53:11 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 22:53:11 localhost systemd[1]: Reached target Initrd File Systems.
Nov 25 22:53:11 localhost systemd[1]: Reached target Initrd Default Target.
Nov 25 22:53:11 localhost systemd[1]: Starting dracut mount hook...
Nov 25 22:53:11 localhost systemd[1]: Finished dracut mount hook.
Nov 25 22:53:11 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 22:53:11 localhost rpc.idmapd[447]: exiting on signal 15
Nov 25 22:53:11 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 22:53:11 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 22:53:11 localhost systemd[1]: Stopped target Network.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Timer Units.
Nov 25 22:53:11 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 22:53:11 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Basic System.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Path Units.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Remote File Systems.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Slice Units.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Socket Units.
Nov 25 22:53:11 localhost systemd[1]: Stopped target System Initialization.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Local File Systems.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Swaps.
Nov 25 22:53:11 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped dracut mount hook.
Nov 25 22:53:11 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 25 22:53:11 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 22:53:11 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 22:53:11 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 25 22:53:11 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 25 22:53:11 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 22:53:11 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 22:53:11 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 22:53:11 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 22:53:11 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 25 22:53:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 22:53:11 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Closed udev Control Socket.
Nov 25 22:53:11 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Closed udev Kernel Socket.
Nov 25 22:53:11 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 25 22:53:11 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 25 22:53:11 localhost systemd[1]: Starting Cleanup udev Database...
Nov 25 22:53:11 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 22:53:11 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 22:53:11 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Stopped Create System Users.
Nov 25 22:53:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 22:53:11 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 22:53:11 localhost systemd[1]: Finished Cleanup udev Database.
Nov 25 22:53:11 localhost systemd[1]: Reached target Switch Root.
Nov 25 22:53:11 localhost systemd[1]: Starting Switch Root...
Nov 25 22:53:11 localhost systemd[1]: Switching root.
Nov 25 22:53:11 localhost systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Nov 25 22:53:11 localhost systemd-journald[303]: Journal stopped
Nov 25 22:53:12 localhost kernel: audit: type=1404 audit(1764111191.605:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 25 22:53:12 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 22:53:12 localhost kernel: SELinux:  policy capability open_perms=1
Nov 25 22:53:12 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 22:53:12 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 25 22:53:12 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 22:53:12 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 22:53:12 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 22:53:12 localhost kernel: audit: type=1403 audit(1764111191.753:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 25 22:53:12 localhost systemd[1]: Successfully loaded SELinux policy in 153.539ms.
Nov 25 22:53:12 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.864ms.
Nov 25 22:53:12 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 22:53:12 localhost systemd[1]: Detected virtualization kvm.
Nov 25 22:53:12 localhost systemd[1]: Detected architecture x86-64.
Nov 25 22:53:12 localhost systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 22:53:12 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 25 22:53:12 localhost systemd[1]: Stopped Switch Root.
Nov 25 22:53:12 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 25 22:53:12 localhost systemd[1]: Created slice Slice /system/getty.
Nov 25 22:53:12 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 25 22:53:12 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 25 22:53:12 localhost systemd[1]: Created slice User and Session Slice.
Nov 25 22:53:12 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 22:53:12 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 25 22:53:12 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 25 22:53:12 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 22:53:12 localhost systemd[1]: Stopped target Switch Root.
Nov 25 22:53:12 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 25 22:53:12 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 25 22:53:12 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 25 22:53:12 localhost systemd[1]: Reached target Path Units.
Nov 25 22:53:12 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 25 22:53:12 localhost systemd[1]: Reached target Slice Units.
Nov 25 22:53:12 localhost systemd[1]: Reached target Swaps.
Nov 25 22:53:12 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 25 22:53:12 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 25 22:53:12 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 25 22:53:12 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 25 22:53:12 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 25 22:53:12 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 22:53:12 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 22:53:12 localhost systemd[1]: Mounting Huge Pages File System...
Nov 25 22:53:12 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 25 22:53:12 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 25 22:53:12 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 25 22:53:12 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 22:53:12 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 22:53:12 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 22:53:12 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 25 22:53:12 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 25 22:53:12 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 25 22:53:12 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 25 22:53:12 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 25 22:53:12 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 25 22:53:12 localhost systemd[1]: Stopped Journal Service.
Nov 25 22:53:12 localhost systemd[1]: Starting Journal Service...
Nov 25 22:53:12 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 22:53:12 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 25 22:53:12 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 22:53:12 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 25 22:53:12 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 25 22:53:12 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 22:53:12 localhost systemd-journald[675]: Journal started
Nov 25 22:53:12 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 25 22:53:12 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 25 22:53:12 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 25 22:53:12 localhost kernel: fuse: init (API version 7.37)
Nov 25 22:53:12 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 22:53:12 localhost systemd[1]: Started Journal Service.
Nov 25 22:53:12 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 25 22:53:12 localhost systemd[1]: Mounted Huge Pages File System.
Nov 25 22:53:12 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 25 22:53:12 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 25 22:53:12 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 25 22:53:12 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 22:53:12 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 22:53:12 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 22:53:12 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 25 22:53:12 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 25 22:53:12 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 25 22:53:12 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 25 22:53:12 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 25 22:53:12 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 25 22:53:12 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 25 22:53:12 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 22:53:12 localhost kernel: ACPI: bus type drm_connector registered
Nov 25 22:53:12 localhost systemd[1]: Mounting FUSE Control File System...
Nov 25 22:53:12 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 22:53:12 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 25 22:53:12 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 25 22:53:12 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 25 22:53:12 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 25 22:53:12 localhost systemd[1]: Starting Create System Users...
Nov 25 22:53:12 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 25 22:53:12 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 25 22:53:12 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 25 22:53:12 localhost systemd-journald[675]: Received client request to flush runtime journal.
Nov 25 22:53:12 localhost systemd[1]: Mounted FUSE Control File System.
Nov 25 22:53:12 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 25 22:53:12 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 25 22:53:12 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 22:53:12 localhost systemd[1]: Finished Create System Users.
Nov 25 22:53:12 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 22:53:12 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 22:53:12 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 22:53:12 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 25 22:53:12 localhost systemd[1]: Reached target Local File Systems.
Nov 25 22:53:12 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 25 22:53:12 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 25 22:53:12 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 25 22:53:12 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 25 22:53:12 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 25 22:53:12 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 25 22:53:12 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 22:53:12 localhost bootctl[692]: Couldn't find EFI system partition, skipping.
Nov 25 22:53:12 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 25 22:53:12 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 22:53:12 localhost systemd[1]: Starting Security Auditing Service...
Nov 25 22:53:12 localhost systemd[1]: Starting RPC Bind...
Nov 25 22:53:12 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 25 22:53:12 localhost auditd[698]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 25 22:53:12 localhost auditd[698]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 25 22:53:12 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 25 22:53:12 localhost systemd[1]: Started RPC Bind.
Nov 25 22:53:12 localhost augenrules[703]: /sbin/augenrules: No change
Nov 25 22:53:12 localhost augenrules[718]: No rules
Nov 25 22:53:12 localhost augenrules[718]: enabled 1
Nov 25 22:53:12 localhost augenrules[718]: failure 1
Nov 25 22:53:12 localhost augenrules[718]: pid 698
Nov 25 22:53:12 localhost augenrules[718]: rate_limit 0
Nov 25 22:53:12 localhost augenrules[718]: backlog_limit 8192
Nov 25 22:53:12 localhost augenrules[718]: lost 0
Nov 25 22:53:12 localhost augenrules[718]: backlog 4
Nov 25 22:53:12 localhost augenrules[718]: backlog_wait_time 60000
Nov 25 22:53:12 localhost augenrules[718]: backlog_wait_time_actual 0
Nov 25 22:53:12 localhost systemd[1]: Started Security Auditing Service.
Nov 25 22:53:12 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 25 22:53:12 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 25 22:53:13 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 25 22:53:13 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 22:53:13 localhost systemd-udevd[726]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 22:53:13 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 25 22:53:13 localhost systemd[1]: Starting Update is Completed...
Nov 25 22:53:13 localhost systemd[1]: Finished Update is Completed.
Nov 25 22:53:13 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 22:53:13 localhost systemd[1]: Reached target System Initialization.
Nov 25 22:53:13 localhost systemd[1]: Started dnf makecache --timer.
Nov 25 22:53:13 localhost systemd[1]: Started Daily rotation of log files.
Nov 25 22:53:13 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 25 22:53:13 localhost systemd[1]: Reached target Timer Units.
Nov 25 22:53:13 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 22:53:13 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 25 22:53:13 localhost systemd[1]: Reached target Socket Units.
Nov 25 22:53:13 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 25 22:53:13 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 22:53:13 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 25 22:53:13 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 22:53:13 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 22:53:13 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 22:53:13 localhost systemd-udevd[747]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 22:53:13 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 25 22:53:13 localhost systemd[1]: Reached target Basic System.
Nov 25 22:53:13 localhost dbus-broker-lau[758]: Ready
Nov 25 22:53:13 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 25 22:53:13 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 25 22:53:13 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 25 22:53:13 localhost systemd[1]: Starting NTP client/server...
Nov 25 22:53:13 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 25 22:53:13 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 25 22:53:13 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 25 22:53:13 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 25 22:53:13 localhost systemd[1]: Started irqbalance daemon.
Nov 25 22:53:13 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 25 22:53:13 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 22:53:13 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 22:53:13 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 22:53:13 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 25 22:53:13 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 25 22:53:13 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 25 22:53:13 localhost chronyd[791]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 22:53:13 localhost systemd[1]: Starting User Login Management...
Nov 25 22:53:13 localhost chronyd[791]: Loaded 0 symmetric keys
Nov 25 22:53:13 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 25 22:53:13 localhost chronyd[791]: Using right/UTC timezone to obtain leap second data
Nov 25 22:53:13 localhost chronyd[791]: Loaded seccomp filter (level 2)
Nov 25 22:53:13 localhost systemd[1]: Started NTP client/server.
Nov 25 22:53:13 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 25 22:53:13 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 25 22:53:13 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 25 22:53:13 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 25 22:53:13 localhost kernel: kvm_amd: TSC scaling supported
Nov 25 22:53:13 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 25 22:53:13 localhost kernel: kvm_amd: Nested Paging enabled
Nov 25 22:53:13 localhost kernel: kvm_amd: LBR virtualization supported
Nov 25 22:53:13 localhost kernel: Console: switching to colour dummy device 80x25
Nov 25 22:53:13 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 25 22:53:13 localhost kernel: [drm] features: -context_init
Nov 25 22:53:13 localhost kernel: [drm] number of scanouts: 1
Nov 25 22:53:13 localhost kernel: [drm] number of cap sets: 0
Nov 25 22:53:13 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 25 22:53:13 localhost systemd-logind[789]: New seat seat0.
Nov 25 22:53:13 localhost systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 22:53:13 localhost systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 22:53:13 localhost systemd[1]: Started User Login Management.
Nov 25 22:53:13 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 25 22:53:13 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 25 22:53:13 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 25 22:53:13 localhost iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Nov 25 22:53:13 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 25 22:53:14 localhost cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 25 Nov 2025 22:53:14 +0000. Up 6.87 seconds.
Nov 25 22:53:14 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 25 22:53:14 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 25 22:53:14 localhost systemd[1]: run-cloud\x2dinit-tmp-tmppsbkfyv6.mount: Deactivated successfully.
Nov 25 22:53:14 localhost systemd[1]: Starting Hostname Service...
Nov 25 22:53:14 localhost systemd[1]: Started Hostname Service.
Nov 25 22:53:14 np0005535838.novalocal systemd-hostnamed[851]: Hostname set to <np0005535838.novalocal> (static)
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Reached target Preparation for Network.
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Starting Network Manager...
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8034] NetworkManager (version 1.54.1-1.el9) is starting... (boot:3edafa6c-db49-405c-9758-42faad226154)
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8039] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8192] manager[0x55b627b65080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8249] hostname: hostname: using hostnamed
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8250] hostname: static hostname changed from (none) to "np0005535838.novalocal"
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8255] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8397] manager[0x55b627b65080]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8397] manager[0x55b627b65080]: rfkill: WWAN hardware radio set enabled
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8490] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8490] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8491] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8492] manager: Networking is enabled by state file
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8494] settings: Loaded settings plugin: keyfile (internal)
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8524] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8552] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8581] dhcp: init: Using DHCP client 'internal'
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8584] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8598] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8611] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8619] device (lo): Activation: starting connection 'lo' (8a2e98f0-f5c9-4e09-92f1-2bf0997fed4f)
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8630] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8633] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8664] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8669] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8672] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8674] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8677] device (eth0): carrier: link connected
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8682] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8689] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8696] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8700] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8701] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8703] manager: NetworkManager state is now CONNECTING
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8704] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8712] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8715] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Started Network Manager.
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Reached target Network.
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8762] dhcp4 (eth0): state changed new lease, address=38.102.83.77
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8772] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8792] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8909] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8912] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8919] device (lo): Activation: successful, device activated.
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8926] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8928] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8932] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8936] device (eth0): Activation: successful, device activated.
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8943] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 22:53:14 np0005535838.novalocal NetworkManager[855]: <info>  [1764111194.8946] manager: startup complete
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Reached target NFS client services.
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: Reached target Remote File Systems.
Nov 25 22:53:14 np0005535838.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 25 Nov 2025 22:53:15 +0000. Up 7.91 seconds.
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: |  eth0  | True |         38.102.83.77         | 255.255.255.0 | global | fa:16:3e:1e:be:4b |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe1e:be4b/64 |       .       |  link  | fa:16:3e:1e:be:4b |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 25 22:53:15 np0005535838.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 22:53:16 np0005535838.novalocal useradd[984]: new group: name=cloud-user, GID=1001
Nov 25 22:53:16 np0005535838.novalocal useradd[984]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 25 22:53:16 np0005535838.novalocal useradd[984]: add 'cloud-user' to group 'adm'
Nov 25 22:53:16 np0005535838.novalocal useradd[984]: add 'cloud-user' to group 'systemd-journal'
Nov 25 22:53:16 np0005535838.novalocal useradd[984]: add 'cloud-user' to shadow group 'adm'
Nov 25 22:53:16 np0005535838.novalocal useradd[984]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: Generating public/private rsa key pair.
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: The key fingerprint is:
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: SHA256:uEpSwMNUTYaT/3AuVFyqJzt6B7TTBd/+gaZ64cpg6yc root@np0005535838.novalocal
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: The key's randomart image is:
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: +---[RSA 3072]----+
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |  ...=o. ..      |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: | +  +.. +.       |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |  =  o ..o .     |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |   o  =o. o .    |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |    .o+BS. . .   |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |   .  ==+ . + .  |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |  . . *+ . + . . |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |   o +E=o +   .  |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |    ooo+++       |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: +----[SHA256]-----+
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: Generating public/private ecdsa key pair.
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: The key fingerprint is:
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: SHA256:kISfC0IPO3Dpvg8AVjVav5bdtdd7Gtun1kd8uYY1YAo root@np0005535838.novalocal
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: The key's randomart image is:
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: +---[ECDSA 256]---+
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |   o.=.          |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |. * +.o.         |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |.* = .oo     .   |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |o = o o.+E. .o. .|
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |.. o . =S...o..oo|
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: | ..   o    .  .+=|
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |  ..          ++=|
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |  ..         ..B=|
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |   ..        .=.+|
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: +----[SHA256]-----+
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: Generating public/private ed25519 key pair.
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: The key fingerprint is:
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: SHA256:gnRg88rLdICZl6A/nNSr5lFwcw6juAeEoZujCKJjfIQ root@np0005535838.novalocal
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: The key's randomart image is:
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: +--[ED25519 256]--+
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |. . +            |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |oo B =           |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |+.* @ +          |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |.B.O %           |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |OE*.B + S        |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |B+.* o .         |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |*o=.o            |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |.=..             |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: |  .              |
Nov 25 22:53:16 np0005535838.novalocal cloud-init[918]: +----[SHA256]-----+
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Reached target Network is Online.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Starting System Logging Service...
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 25 22:53:16 np0005535838.novalocal sm-notify[1000]: Version 2.5.4 starting
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Starting Permit User Sessions...
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Finished Permit User Sessions.
Nov 25 22:53:16 np0005535838.novalocal sshd[1002]: Server listening on 0.0.0.0 port 22.
Nov 25 22:53:16 np0005535838.novalocal sshd[1002]: Server listening on :: port 22.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Started Command Scheduler.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Started Getty on tty1.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 25 22:53:16 np0005535838.novalocal crond[1005]: (CRON) STARTUP (1.5.7)
Nov 25 22:53:16 np0005535838.novalocal crond[1005]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 25 22:53:16 np0005535838.novalocal crond[1005]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 40% if used.)
Nov 25 22:53:16 np0005535838.novalocal crond[1005]: (CRON) INFO (running with inotify support)
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Reached target Login Prompts.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Started System Logging Service.
Nov 25 22:53:16 np0005535838.novalocal rsyslogd[1001]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1001" x-info="https://www.rsyslog.com"] start
Nov 25 22:53:16 np0005535838.novalocal rsyslogd[1001]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Reached target Multi-User System.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 25 22:53:16 np0005535838.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 25 22:53:16 np0005535838.novalocal rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 22:53:16 np0005535838.novalocal sshd-session[1043]: Connection reset by 38.102.83.114 port 49118 [preauth]
Nov 25 22:53:16 np0005535838.novalocal sshd-session[1061]: Unable to negotiate with 38.102.83.114 port 49126: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 25 22:53:16 np0005535838.novalocal sshd-session[1063]: Connection reset by 38.102.83.114 port 49130 [preauth]
Nov 25 22:53:16 np0005535838.novalocal kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Nov 25 22:53:16 np0005535838.novalocal kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 25 22:53:16 np0005535838.novalocal sshd-session[1068]: Unable to negotiate with 38.102.83.114 port 49144: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 25 22:53:16 np0005535838.novalocal sshd-session[1074]: Unable to negotiate with 38.102.83.114 port 49148: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 25 22:53:17 np0005535838.novalocal sshd-session[1078]: Connection reset by 38.102.83.114 port 49156 [preauth]
Nov 25 22:53:17 np0005535838.novalocal sshd-session[1092]: Connection closed by 38.102.83.114 port 49168 [preauth]
Nov 25 22:53:17 np0005535838.novalocal sshd-session[1104]: Unable to negotiate with 38.102.83.114 port 49176: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 25 22:53:17 np0005535838.novalocal sshd-session[1113]: Unable to negotiate with 38.102.83.114 port 49190: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 25 22:53:17 np0005535838.novalocal cloud-init[1122]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 25 Nov 2025 22:53:17 +0000. Up 9.75 seconds.
Nov 25 22:53:17 np0005535838.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 25 22:53:17 np0005535838.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 25 22:53:17 np0005535838.novalocal cloud-init[1278]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 25 Nov 2025 22:53:17 +0000. Up 10.14 seconds.
Nov 25 22:53:17 np0005535838.novalocal dracut[1283]: dracut-057-102.git20250818.el9
Nov 25 22:53:17 np0005535838.novalocal cloud-init[1292]: #############################################################
Nov 25 22:53:17 np0005535838.novalocal cloud-init[1299]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 25 22:53:17 np0005535838.novalocal cloud-init[1303]: 256 SHA256:kISfC0IPO3Dpvg8AVjVav5bdtdd7Gtun1kd8uYY1YAo root@np0005535838.novalocal (ECDSA)
Nov 25 22:53:17 np0005535838.novalocal cloud-init[1305]: 256 SHA256:gnRg88rLdICZl6A/nNSr5lFwcw6juAeEoZujCKJjfIQ root@np0005535838.novalocal (ED25519)
Nov 25 22:53:17 np0005535838.novalocal cloud-init[1307]: 3072 SHA256:uEpSwMNUTYaT/3AuVFyqJzt6B7TTBd/+gaZ64cpg6yc root@np0005535838.novalocal (RSA)
Nov 25 22:53:17 np0005535838.novalocal cloud-init[1308]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 25 22:53:17 np0005535838.novalocal cloud-init[1309]: #############################################################
Nov 25 22:53:17 np0005535838.novalocal cloud-init[1278]: Cloud-init v. 24.4-7.el9 finished at Tue, 25 Nov 2025 22:53:17 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.38 seconds
Nov 25 22:53:17 np0005535838.novalocal dracut[1285]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 25 22:53:17 np0005535838.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 25 22:53:17 np0005535838.novalocal systemd[1]: Reached target Cloud-init target.
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 22:53:18 np0005535838.novalocal dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: memstrack is not available
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: memstrack is not available
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 22:53:19 np0005535838.novalocal dracut[1285]: *** Including module: systemd ***
Nov 25 22:53:19 np0005535838.novalocal chronyd[791]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 25 22:53:19 np0005535838.novalocal chronyd[791]: System clock TAI offset set to 37 seconds
Nov 25 22:53:20 np0005535838.novalocal dracut[1285]: *** Including module: fips ***
Nov 25 22:53:20 np0005535838.novalocal dracut[1285]: *** Including module: systemd-initrd ***
Nov 25 22:53:20 np0005535838.novalocal dracut[1285]: *** Including module: i18n ***
Nov 25 22:53:20 np0005535838.novalocal dracut[1285]: *** Including module: drm ***
Nov 25 22:53:21 np0005535838.novalocal dracut[1285]: *** Including module: prefixdevname ***
Nov 25 22:53:21 np0005535838.novalocal dracut[1285]: *** Including module: kernel-modules ***
Nov 25 22:53:21 np0005535838.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 25 22:53:22 np0005535838.novalocal dracut[1285]: *** Including module: kernel-modules-extra ***
Nov 25 22:53:22 np0005535838.novalocal dracut[1285]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 25 22:53:22 np0005535838.novalocal dracut[1285]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 25 22:53:22 np0005535838.novalocal dracut[1285]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 25 22:53:22 np0005535838.novalocal dracut[1285]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 25 22:53:22 np0005535838.novalocal dracut[1285]: *** Including module: qemu ***
Nov 25 22:53:22 np0005535838.novalocal dracut[1285]: *** Including module: fstab-sys ***
Nov 25 22:53:22 np0005535838.novalocal dracut[1285]: *** Including module: rootfs-block ***
Nov 25 22:53:22 np0005535838.novalocal dracut[1285]: *** Including module: terminfo ***
Nov 25 22:53:22 np0005535838.novalocal dracut[1285]: *** Including module: udev-rules ***
Nov 25 22:53:23 np0005535838.novalocal dracut[1285]: Skipping udev rule: 91-permissions.rules
Nov 25 22:53:23 np0005535838.novalocal dracut[1285]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 25 22:53:23 np0005535838.novalocal dracut[1285]: *** Including module: virtiofs ***
Nov 25 22:53:23 np0005535838.novalocal dracut[1285]: *** Including module: dracut-systemd ***
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: IRQ 25 affinity is now unmanaged
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: IRQ 31 affinity is now unmanaged
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: IRQ 28 affinity is now unmanaged
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: IRQ 32 affinity is now unmanaged
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: IRQ 30 affinity is now unmanaged
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 25 22:53:23 np0005535838.novalocal irqbalance[783]: IRQ 29 affinity is now unmanaged
Nov 25 22:53:23 np0005535838.novalocal dracut[1285]: *** Including module: usrmount ***
Nov 25 22:53:23 np0005535838.novalocal dracut[1285]: *** Including module: base ***
Nov 25 22:53:23 np0005535838.novalocal dracut[1285]: *** Including module: fs-lib ***
Nov 25 22:53:23 np0005535838.novalocal dracut[1285]: *** Including module: kdumpbase ***
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:   microcode_ctl module: mangling fw_dir
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: configuration "intel" is ignored
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]: *** Including module: openssl ***
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]: *** Including module: shutdown ***
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]: *** Including module: squash ***
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]: *** Including modules done ***
Nov 25 22:53:24 np0005535838.novalocal dracut[1285]: *** Installing kernel module dependencies ***
Nov 25 22:53:25 np0005535838.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 22:53:25 np0005535838.novalocal dracut[1285]: *** Installing kernel module dependencies done ***
Nov 25 22:53:25 np0005535838.novalocal dracut[1285]: *** Resolving executable dependencies ***
Nov 25 22:53:27 np0005535838.novalocal dracut[1285]: *** Resolving executable dependencies done ***
Nov 25 22:53:27 np0005535838.novalocal dracut[1285]: *** Generating early-microcode cpio image ***
Nov 25 22:53:27 np0005535838.novalocal dracut[1285]: *** Store current command line parameters ***
Nov 25 22:53:27 np0005535838.novalocal dracut[1285]: Stored kernel commandline:
Nov 25 22:53:27 np0005535838.novalocal dracut[1285]: No dracut internal kernel commandline stored in the initramfs
Nov 25 22:53:28 np0005535838.novalocal dracut[1285]: *** Install squash loader ***
Nov 25 22:53:29 np0005535838.novalocal dracut[1285]: *** Squashing the files inside the initramfs ***
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: *** Squashing the files inside the initramfs done ***
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: *** Hardlinking files ***
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: Mode:           real
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: Files:          50
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: Linked:         0 files
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: Compared:       0 xattrs
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: Compared:       0 files
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: Saved:          0 B
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: Duration:       0.000951 seconds
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: *** Hardlinking files done ***
Nov 25 22:53:30 np0005535838.novalocal dracut[1285]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 25 22:53:31 np0005535838.novalocal kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Nov 25 22:53:31 np0005535838.novalocal kdumpctl[1015]: kdump: Starting kdump: [OK]
Nov 25 22:53:31 np0005535838.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 25 22:53:31 np0005535838.novalocal systemd[1]: Startup finished in 1.611s (kernel) + 2.700s (initrd) + 19.819s (userspace) = 24.130s.
Nov 25 22:53:44 np0005535838.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 22:54:26 np0005535838.novalocal chronyd[791]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Nov 25 22:55:13 np0005535838.novalocal sshd-session[4295]: Accepted publickey for zuul from 38.102.83.114 port 51244 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 25 22:55:13 np0005535838.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 25 22:55:13 np0005535838.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 25 22:55:13 np0005535838.novalocal systemd-logind[789]: New session 1 of user zuul.
Nov 25 22:55:13 np0005535838.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 25 22:55:13 np0005535838.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Queued start job for default target Main User Target.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Created slice User Application Slice.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Reached target Paths.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Reached target Timers.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Starting D-Bus User Message Bus Socket...
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Starting Create User's Volatile Files and Directories...
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Finished Create User's Volatile Files and Directories.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Listening on D-Bus User Message Bus Socket.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Reached target Sockets.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Reached target Basic System.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Reached target Main User Target.
Nov 25 22:55:13 np0005535838.novalocal systemd[4299]: Startup finished in 149ms.
Nov 25 22:55:13 np0005535838.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 25 22:55:13 np0005535838.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 25 22:55:13 np0005535838.novalocal sshd-session[4295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 22:55:13 np0005535838.novalocal python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 22:55:16 np0005535838.novalocal python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 22:55:22 np0005535838.novalocal python3[4467]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 22:55:23 np0005535838.novalocal python3[4507]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 25 22:55:25 np0005535838.novalocal python3[4533]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmokCZV1ohA/dBgPDQFwsEwGi7bp67XDWUBg6xHh4L1zLa2Z9jcISwU5X/eepX1jQEEUGEzBp1VGdiXF/CmVDSdK2t37ngENzNgsU4CiBjcbHIylEOzktW7a/NZ44lwVzaiop/lmFbnQWjWk/Z2FH7jlY1Gl9SNoM1knAZtsnTr8ciuHNWq+P4NbgWh1dhyVXLPtRk4OiQ/byAY3BXNE3XpwaJBCf5ESIBnUO9LMxbyroV1fA1HPEtehd/9n4SyomvHTdWzGApF7Swo0B4uhR2HP56EvPAfPSHZ/t0HUl7tjAxbzVxa1poJDIRJXaWqbnM9n+yHk3lrw+aLZ8ETpr8JEai4EjgxJjjU6ePJsZYDEmVPHEABhh5cm14mtT+Hs83k1jRcNNM5jNKXf/I6tlvaS++xXjg+QbQs8yVQ+A+dV7OFyUSIbeYqIXsUIux5gFT9Yt9HHi4e4AkEdU2fxRyPg4ekUdVmcaLeTk7RacG1bNcez9g7EP2zg3IVUuu6zk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:25 np0005535838.novalocal python3[4557]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:26 np0005535838.novalocal python3[4656]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 22:55:26 np0005535838.novalocal python3[4727]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764111326.0321875-207-170237529251704/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0ef163a185774c22975fb6ea7653e91f_id_rsa follow=False checksum=07736deb23d8e4b2f267fc54a7c47707037310e8 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:27 np0005535838.novalocal python3[4850]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 22:55:28 np0005535838.novalocal python3[4921]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764111327.3414114-240-182833777339376/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0ef163a185774c22975fb6ea7653e91f_id_rsa.pub follow=False checksum=2b051e17575fcad56bb6c6937fd1a45598225ee2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:30 np0005535838.novalocal python3[4969]: ansible-ping Invoked with data=pong
Nov 25 22:55:31 np0005535838.novalocal python3[4993]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 22:55:32 np0005535838.novalocal python3[5051]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 25 22:55:33 np0005535838.novalocal python3[5083]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:33 np0005535838.novalocal python3[5107]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:34 np0005535838.novalocal python3[5131]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:34 np0005535838.novalocal python3[5155]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:34 np0005535838.novalocal python3[5179]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:35 np0005535838.novalocal python3[5203]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:36 np0005535838.novalocal sudo[5227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgqwardelovucwnxxrsevfvjlackuwie ; /usr/bin/python3'
Nov 25 22:55:36 np0005535838.novalocal sudo[5227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:55:36 np0005535838.novalocal python3[5229]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:36 np0005535838.novalocal sudo[5227]: pam_unix(sudo:session): session closed for user root
Nov 25 22:55:37 np0005535838.novalocal sudo[5305]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wptmnukxwdzwulapyanswmucuyakneke ; /usr/bin/python3'
Nov 25 22:55:37 np0005535838.novalocal sudo[5305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:55:37 np0005535838.novalocal python3[5307]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 22:55:37 np0005535838.novalocal sudo[5305]: pam_unix(sudo:session): session closed for user root
Nov 25 22:55:37 np0005535838.novalocal sudo[5378]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnnqushapwixlsiaattjjnqjwlljmjwl ; /usr/bin/python3'
Nov 25 22:55:37 np0005535838.novalocal sudo[5378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:55:37 np0005535838.novalocal python3[5380]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764111336.7685905-21-41165948861193/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:37 np0005535838.novalocal sudo[5378]: pam_unix(sudo:session): session closed for user root
Nov 25 22:55:38 np0005535838.novalocal python3[5428]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:38 np0005535838.novalocal python3[5452]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:38 np0005535838.novalocal python3[5476]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:39 np0005535838.novalocal python3[5500]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:39 np0005535838.novalocal python3[5524]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:39 np0005535838.novalocal python3[5548]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:40 np0005535838.novalocal python3[5572]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:40 np0005535838.novalocal python3[5596]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:40 np0005535838.novalocal python3[5620]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:41 np0005535838.novalocal python3[5644]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:41 np0005535838.novalocal python3[5668]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:41 np0005535838.novalocal python3[5692]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:41 np0005535838.novalocal python3[5716]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:42 np0005535838.novalocal python3[5740]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:42 np0005535838.novalocal python3[5764]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:42 np0005535838.novalocal python3[5788]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:43 np0005535838.novalocal python3[5812]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:43 np0005535838.novalocal python3[5836]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:43 np0005535838.novalocal python3[5860]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:43 np0005535838.novalocal python3[5884]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:44 np0005535838.novalocal python3[5908]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:44 np0005535838.novalocal python3[5932]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:44 np0005535838.novalocal python3[5956]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:45 np0005535838.novalocal python3[5980]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:45 np0005535838.novalocal python3[6004]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:45 np0005535838.novalocal python3[6028]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 22:55:48 np0005535838.novalocal sudo[6052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flbdgtyazkotlpomtusldtnmakwaqlbp ; /usr/bin/python3'
Nov 25 22:55:48 np0005535838.novalocal sudo[6052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:55:48 np0005535838.novalocal python3[6054]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 22:55:48 np0005535838.novalocal systemd[1]: Starting Time & Date Service...
Nov 25 22:55:48 np0005535838.novalocal systemd[1]: Started Time & Date Service.
Nov 25 22:55:48 np0005535838.novalocal systemd-timedated[6056]: Changed time zone to 'UTC' (UTC).
Nov 25 22:55:48 np0005535838.novalocal sudo[6052]: pam_unix(sudo:session): session closed for user root
Nov 25 22:55:48 np0005535838.novalocal sudo[6083]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxampszalkblwebumtskeusgghpgvsqa ; /usr/bin/python3'
Nov 25 22:55:48 np0005535838.novalocal sudo[6083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:55:48 np0005535838.novalocal python3[6085]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:48 np0005535838.novalocal sudo[6083]: pam_unix(sudo:session): session closed for user root
Nov 25 22:55:49 np0005535838.novalocal python3[6161]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 22:55:49 np0005535838.novalocal python3[6232]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764111349.067419-153-185972815722863/source _original_basename=tmp93__27ds follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:50 np0005535838.novalocal python3[6332]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 22:55:50 np0005535838.novalocal python3[6403]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764111350.0090945-183-250604696236522/source _original_basename=tmpps097i7u follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:51 np0005535838.novalocal sudo[6503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogarrzmckbeprghnoxbgoffifgnmytge ; /usr/bin/python3'
Nov 25 22:55:51 np0005535838.novalocal sudo[6503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:55:51 np0005535838.novalocal python3[6505]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 22:55:51 np0005535838.novalocal sudo[6503]: pam_unix(sudo:session): session closed for user root
Nov 25 22:55:51 np0005535838.novalocal sudo[6576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypollugymwhwihnwxmvrtcxhuvyokmyz ; /usr/bin/python3'
Nov 25 22:55:51 np0005535838.novalocal sudo[6576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:55:51 np0005535838.novalocal python3[6578]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764111351.129548-231-81003696640765/source _original_basename=tmp6iqmex1w follow=False checksum=b5d32a20a180d280e10f96c8ee4e4addc6022f99 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:51 np0005535838.novalocal sudo[6576]: pam_unix(sudo:session): session closed for user root
Nov 25 22:55:52 np0005535838.novalocal python3[6626]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 22:55:52 np0005535838.novalocal python3[6652]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 22:55:53 np0005535838.novalocal sudo[6730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbheqcyzsqdkijdniktcrlvtkqjtpjbz ; /usr/bin/python3'
Nov 25 22:55:53 np0005535838.novalocal sudo[6730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:55:53 np0005535838.novalocal python3[6732]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 22:55:53 np0005535838.novalocal sudo[6730]: pam_unix(sudo:session): session closed for user root
Nov 25 22:55:53 np0005535838.novalocal sudo[6803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvketaqqidftmdhyldtdizxajjfcufdt ; /usr/bin/python3'
Nov 25 22:55:53 np0005535838.novalocal sudo[6803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:55:53 np0005535838.novalocal python3[6805]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764111352.9069505-273-148198345768801/source _original_basename=tmpbvvfsosz follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:55:53 np0005535838.novalocal sudo[6803]: pam_unix(sudo:session): session closed for user root
Nov 25 22:55:54 np0005535838.novalocal sudo[6854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgwmjrfjqlacfvgjuqqulnzgaqfrcodk ; /usr/bin/python3'
Nov 25 22:55:54 np0005535838.novalocal sudo[6854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:55:54 np0005535838.novalocal python3[6856]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-8aa9-b87e-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 22:55:54 np0005535838.novalocal sudo[6854]: pam_unix(sudo:session): session closed for user root
Nov 25 22:55:54 np0005535838.novalocal python3[6884]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-8aa9-b87e-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 25 22:55:56 np0005535838.novalocal python3[6912]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:56:13 np0005535838.novalocal sudo[6936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlwhorxezvxkawajlpejvcebywysziqh ; /usr/bin/python3'
Nov 25 22:56:13 np0005535838.novalocal sudo[6936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:56:13 np0005535838.novalocal python3[6938]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:56:13 np0005535838.novalocal sudo[6936]: pam_unix(sudo:session): session closed for user root
Nov 25 22:56:18 np0005535838.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 22:56:25 np0005535838.novalocal sshd-session[6942]: error: kex_exchange_identification: read: Connection reset by peer
Nov 25 22:56:25 np0005535838.novalocal sshd-session[6942]: Connection reset by 45.140.17.97 port 29116
Nov 25 22:56:47 np0005535838.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 22:56:47 np0005535838.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 25 22:56:47 np0005535838.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 25 22:56:47 np0005535838.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 25 22:56:47 np0005535838.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 25 22:56:47 np0005535838.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 25 22:56:47 np0005535838.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 25 22:56:47 np0005535838.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 25 22:56:47 np0005535838.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 25 22:56:47 np0005535838.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.7849] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 22:56:47 np0005535838.novalocal systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.8044] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.8087] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.8093] device (eth1): carrier: link connected
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.8096] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.8106] policy: auto-activating connection 'Wired connection 1' (395ca42c-36a5-36fb-90b5-378a60416156)
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.8113] device (eth1): Activation: starting connection 'Wired connection 1' (395ca42c-36a5-36fb-90b5-378a60416156)
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.8115] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.8118] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.8125] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 22:56:47 np0005535838.novalocal NetworkManager[855]: <info>  [1764111407.8133] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 22:56:48 np0005535838.novalocal python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-119a-e3c0-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 22:56:58 np0005535838.novalocal sudo[7048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyvqjktabgvetjxjhhyozsvqpdplhgzy ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 22:56:58 np0005535838.novalocal sudo[7048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:56:58 np0005535838.novalocal python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 22:56:58 np0005535838.novalocal sudo[7048]: pam_unix(sudo:session): session closed for user root
Nov 25 22:56:59 np0005535838.novalocal sudo[7121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edjadbnfrwcighzjhvrzwmsxwogfobod ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 22:56:59 np0005535838.novalocal sudo[7121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:56:59 np0005535838.novalocal python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764111418.5904531-102-222388212489256/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=deaca1671c15558608e1a78179cd209f2d80334e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:56:59 np0005535838.novalocal sudo[7121]: pam_unix(sudo:session): session closed for user root
Nov 25 22:56:59 np0005535838.novalocal sudo[7171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hubfrfymfpeeoajmsfllyvopgzzepady ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 22:56:59 np0005535838.novalocal sudo[7171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:57:00 np0005535838.novalocal python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Stopping Network Manager...
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[855]: <info>  [1764111420.2869] caught SIGTERM, shutting down normally.
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[855]: <info>  [1764111420.2877] dhcp4 (eth0): canceled DHCP transaction
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[855]: <info>  [1764111420.2877] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[855]: <info>  [1764111420.2877] dhcp4 (eth0): state changed no lease
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[855]: <info>  [1764111420.2878] manager: NetworkManager state is now CONNECTING
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[855]: <info>  [1764111420.2962] dhcp4 (eth1): canceled DHCP transaction
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[855]: <info>  [1764111420.2962] dhcp4 (eth1): state changed no lease
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[855]: <info>  [1764111420.3006] exiting (success)
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Stopped Network Manager.
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: NetworkManager.service: Consumed 1.622s CPU time, 10.0M memory peak.
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Starting Network Manager...
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.3559] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:3edafa6c-db49-405c-9758-42faad226154)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.3564] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.3630] manager[0x557215882070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Starting Hostname Service...
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Started Hostname Service.
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4341] hostname: hostname: using hostnamed
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4344] hostname: static hostname changed from (none) to "np0005535838.novalocal"
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4351] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4358] manager[0x557215882070]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4358] manager[0x557215882070]: rfkill: WWAN hardware radio set enabled
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4403] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4403] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4404] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4405] manager: Networking is enabled by state file
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4409] settings: Loaded settings plugin: keyfile (internal)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4416] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4457] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4472] dhcp: init: Using DHCP client 'internal'
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4477] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4484] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4492] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4506] device (lo): Activation: starting connection 'lo' (8a2e98f0-f5c9-4e09-92f1-2bf0997fed4f)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4518] device (eth0): carrier: link connected
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4526] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4534] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4535] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4546] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4558] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4569] device (eth1): carrier: link connected
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4577] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4585] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (395ca42c-36a5-36fb-90b5-378a60416156) (indicated)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4586] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4597] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4608] device (eth1): Activation: starting connection 'Wired connection 1' (395ca42c-36a5-36fb-90b5-378a60416156)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4618] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Started Network Manager.
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4624] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4627] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4630] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4635] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4657] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4661] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4664] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4667] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4675] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4678] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4686] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4689] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4703] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4710] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4716] device (lo): Activation: successful, device activated.
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4740] dhcp4 (eth0): state changed new lease, address=38.102.83.77
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4746] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 22:57:00 np0005535838.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4816] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4833] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4835] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4838] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4842] device (eth0): Activation: successful, device activated.
Nov 25 22:57:00 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111420.4847] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 22:57:00 np0005535838.novalocal sudo[7171]: pam_unix(sudo:session): session closed for user root
Nov 25 22:57:00 np0005535838.novalocal python3[7257]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-119a-e3c0-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 22:57:10 np0005535838.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 22:57:30 np0005535838.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.2822] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 22:57:45 np0005535838.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 22:57:45 np0005535838.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3110] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3113] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3125] device (eth1): Activation: successful, device activated.
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3136] manager: startup complete
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3141] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <warn>  [1764111465.3172] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 25 22:57:45 np0005535838.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3187] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3273] dhcp4 (eth1): canceled DHCP transaction
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3274] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3274] dhcp4 (eth1): state changed no lease
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3297] policy: auto-activating connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea)
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3304] device (eth1): Activation: starting connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea)
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3305] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3309] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3318] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3328] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3825] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3830] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 22:57:45 np0005535838.novalocal NetworkManager[7181]: <info>  [1764111465.3845] device (eth1): Activation: successful, device activated.
Nov 25 22:57:55 np0005535838.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 22:58:00 np0005535838.novalocal sshd-session[4308]: Received disconnect from 38.102.83.114 port 51244:11: disconnected by user
Nov 25 22:58:00 np0005535838.novalocal sshd-session[4308]: Disconnected from user zuul 38.102.83.114 port 51244
Nov 25 22:58:00 np0005535838.novalocal sshd-session[4295]: pam_unix(sshd:session): session closed for user zuul
Nov 25 22:58:00 np0005535838.novalocal systemd-logind[789]: Session 1 logged out. Waiting for processes to exit.
Nov 25 22:58:01 np0005535838.novalocal systemd[4299]: Starting Mark boot as successful...
Nov 25 22:58:01 np0005535838.novalocal systemd[4299]: Finished Mark boot as successful.
Nov 25 22:58:01 np0005535838.novalocal sshd-session[7286]: Accepted publickey for zuul from 38.102.83.114 port 38168 ssh2: RSA SHA256:JQBEbRWdTRxWWBZFMXTcgpV6tX76cFl0ZQrAk8DySQQ
Nov 25 22:58:01 np0005535838.novalocal systemd-logind[789]: New session 3 of user zuul.
Nov 25 22:58:01 np0005535838.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 25 22:58:01 np0005535838.novalocal sshd-session[7286]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 22:58:01 np0005535838.novalocal sudo[7365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbsyslghnbfiolgitaviintmbxjkpkhs ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 22:58:01 np0005535838.novalocal sudo[7365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:58:01 np0005535838.novalocal python3[7367]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 22:58:01 np0005535838.novalocal sudo[7365]: pam_unix(sudo:session): session closed for user root
Nov 25 22:58:02 np0005535838.novalocal sudo[7438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifcjzjlgkxwzgvyfowqrnklcaggvvwnb ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 22:58:02 np0005535838.novalocal sudo[7438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 22:58:02 np0005535838.novalocal python3[7440]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764111481.446876-267-171045953555470/source _original_basename=tmp5nlays8v follow=False checksum=03a7293d1f772836c1203af9d59475bc4177093e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 22:58:02 np0005535838.novalocal sudo[7438]: pam_unix(sudo:session): session closed for user root
Nov 25 22:58:04 np0005535838.novalocal sshd-session[7289]: Connection closed by 38.102.83.114 port 38168
Nov 25 22:58:04 np0005535838.novalocal sshd-session[7286]: pam_unix(sshd:session): session closed for user zuul
Nov 25 22:58:04 np0005535838.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 25 22:58:04 np0005535838.novalocal systemd-logind[789]: Session 3 logged out. Waiting for processes to exit.
Nov 25 22:58:04 np0005535838.novalocal systemd-logind[789]: Removed session 3.
Nov 25 23:00:17 np0005535838.novalocal sshd-session[7467]: Invalid user elsearch from 121.37.183.152 port 57968
Nov 25 23:00:17 np0005535838.novalocal sshd-session[7467]: Received disconnect from 121.37.183.152 port 57968:11: Bye Bye [preauth]
Nov 25 23:00:17 np0005535838.novalocal sshd-session[7467]: Disconnected from invalid user elsearch 121.37.183.152 port 57968 [preauth]
Nov 25 23:00:44 np0005535838.novalocal sshd-session[7470]: Invalid user bitnami from 124.18.141.70 port 59196
Nov 25 23:00:45 np0005535838.novalocal sshd-session[7470]: Received disconnect from 124.18.141.70 port 59196:11: Bye Bye [preauth]
Nov 25 23:00:45 np0005535838.novalocal sshd-session[7470]: Disconnected from invalid user bitnami 124.18.141.70 port 59196 [preauth]
Nov 25 23:00:56 np0005535838.novalocal chronyd[791]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 25 23:01:01 np0005535838.novalocal systemd[4299]: Created slice User Background Tasks Slice.
Nov 25 23:01:01 np0005535838.novalocal systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 23:01:01 np0005535838.novalocal systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 23:01:01 np0005535838.novalocal CROND[7474]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 23:01:01 np0005535838.novalocal run-parts[7477]: (/etc/cron.hourly) starting 0anacron
Nov 25 23:01:01 np0005535838.novalocal anacron[7485]: Anacron started on 2025-11-25
Nov 25 23:01:01 np0005535838.novalocal anacron[7485]: Will run job `cron.daily' in 31 min.
Nov 25 23:01:01 np0005535838.novalocal anacron[7485]: Will run job `cron.weekly' in 51 min.
Nov 25 23:01:01 np0005535838.novalocal anacron[7485]: Will run job `cron.monthly' in 71 min.
Nov 25 23:01:01 np0005535838.novalocal anacron[7485]: Jobs will be executed sequentially
Nov 25 23:01:01 np0005535838.novalocal run-parts[7487]: (/etc/cron.hourly) finished 0anacron
Nov 25 23:01:01 np0005535838.novalocal CROND[7473]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 23:01:14 np0005535838.novalocal sshd-session[7488]: Invalid user bitnami from 68.183.82.237 port 50426
Nov 25 23:01:14 np0005535838.novalocal sshd-session[7488]: Received disconnect from 68.183.82.237 port 50426:11: Bye Bye [preauth]
Nov 25 23:01:14 np0005535838.novalocal sshd-session[7488]: Disconnected from invalid user bitnami 68.183.82.237 port 50426 [preauth]
Nov 25 23:01:21 np0005535838.novalocal sshd-session[7490]: Invalid user testing from 202.83.162.167 port 42728
Nov 25 23:01:21 np0005535838.novalocal sshd-session[7490]: Received disconnect from 202.83.162.167 port 42728:11: Bye Bye [preauth]
Nov 25 23:01:21 np0005535838.novalocal sshd-session[7490]: Disconnected from invalid user testing 202.83.162.167 port 42728 [preauth]
Nov 25 23:01:33 np0005535838.novalocal sshd-session[7492]: Invalid user ec2-user from 171.244.140.22 port 54552
Nov 25 23:01:33 np0005535838.novalocal sshd-session[7492]: Received disconnect from 171.244.140.22 port 54552:11: Bye Bye [preauth]
Nov 25 23:01:33 np0005535838.novalocal sshd-session[7492]: Disconnected from invalid user ec2-user 171.244.140.22 port 54552 [preauth]
Nov 25 23:02:42 np0005535838.novalocal sshd-session[7494]: Invalid user user5 from 45.78.216.132 port 46498
Nov 25 23:02:42 np0005535838.novalocal sshd-session[7494]: Received disconnect from 45.78.216.132 port 46498:11: Bye Bye [preauth]
Nov 25 23:02:42 np0005535838.novalocal sshd-session[7494]: Disconnected from invalid user user5 45.78.216.132 port 46498 [preauth]
Nov 25 23:03:14 np0005535838.novalocal sshd-session[7496]: Received disconnect from 124.18.141.70 port 41712:11: Bye Bye [preauth]
Nov 25 23:03:14 np0005535838.novalocal sshd-session[7496]: Disconnected from authenticating user root 124.18.141.70 port 41712 [preauth]
Nov 25 23:03:40 np0005535838.novalocal sshd-session[7498]: Received disconnect from 68.183.82.237 port 35404:11: Bye Bye [preauth]
Nov 25 23:03:40 np0005535838.novalocal sshd-session[7498]: Disconnected from authenticating user root 68.183.82.237 port 35404 [preauth]
Nov 25 23:04:16 np0005535838.novalocal sshd-session[7502]: Connection reset by 121.37.183.152 port 39756 [preauth]
Nov 25 23:04:24 np0005535838.novalocal sshd-session[7504]: Received disconnect from 202.83.162.167 port 60942:11: Bye Bye [preauth]
Nov 25 23:04:24 np0005535838.novalocal sshd-session[7504]: Disconnected from authenticating user root 202.83.162.167 port 60942 [preauth]
Nov 25 23:04:46 np0005535838.novalocal sshd-session[7506]: Received disconnect from 171.244.140.22 port 34892:11: Bye Bye [preauth]
Nov 25 23:04:46 np0005535838.novalocal sshd-session[7506]: Disconnected from authenticating user root 171.244.140.22 port 34892 [preauth]
Nov 25 23:04:54 np0005535838.novalocal sshd-session[7508]: Invalid user magento from 68.183.82.237 port 54604
Nov 25 23:04:55 np0005535838.novalocal sshd-session[7508]: Received disconnect from 68.183.82.237 port 54604:11: Bye Bye [preauth]
Nov 25 23:04:55 np0005535838.novalocal sshd-session[7508]: Disconnected from invalid user magento 68.183.82.237 port 54604 [preauth]
Nov 25 23:04:59 np0005535838.novalocal sshd-session[7510]: Invalid user g from 124.18.141.70 port 35022
Nov 25 23:04:59 np0005535838.novalocal sshd-session[7510]: Received disconnect from 124.18.141.70 port 35022:11: Bye Bye [preauth]
Nov 25 23:04:59 np0005535838.novalocal sshd-session[7510]: Disconnected from invalid user g 124.18.141.70 port 35022 [preauth]
Nov 25 23:05:22 np0005535838.novalocal sshd-session[7514]: Accepted publickey for zuul from 38.102.83.114 port 51256 ssh2: RSA SHA256:JQBEbRWdTRxWWBZFMXTcgpV6tX76cFl0ZQrAk8DySQQ
Nov 25 23:05:22 np0005535838.novalocal systemd-logind[789]: New session 4 of user zuul.
Nov 25 23:05:22 np0005535838.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 25 23:05:22 np0005535838.novalocal sshd-session[7514]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:05:22 np0005535838.novalocal sudo[7541]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jspujnjrnhyhnmcpjqddvnwsltozzpya ; /usr/bin/python3'
Nov 25 23:05:22 np0005535838.novalocal sudo[7541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:22 np0005535838.novalocal python3[7543]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163e3b-3c83-ed86-e3a6-000000001cd6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:05:22 np0005535838.novalocal sudo[7541]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:22 np0005535838.novalocal sudo[7569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nanvifaxykbggyfvdzqcbvmyulyzibbc ; /usr/bin/python3'
Nov 25 23:05:22 np0005535838.novalocal sudo[7569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:22 np0005535838.novalocal python3[7571]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:05:22 np0005535838.novalocal sudo[7569]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:22 np0005535838.novalocal sudo[7595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbfsekcgdszvbltdgaojviwqxznntmon ; /usr/bin/python3'
Nov 25 23:05:22 np0005535838.novalocal sudo[7595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:22 np0005535838.novalocal python3[7598]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:05:22 np0005535838.novalocal sudo[7595]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:23 np0005535838.novalocal sudo[7622]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zraplawqltlriqbaesjskfrzrhnusvrs ; /usr/bin/python3'
Nov 25 23:05:23 np0005535838.novalocal sudo[7622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:23 np0005535838.novalocal python3[7624]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:05:23 np0005535838.novalocal sudo[7622]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:23 np0005535838.novalocal sudo[7648]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imfvsrfllnikkdfavjyfsdmcfpjfwmvc ; /usr/bin/python3'
Nov 25 23:05:23 np0005535838.novalocal sudo[7648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:23 np0005535838.novalocal python3[7650]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:05:23 np0005535838.novalocal sudo[7648]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:23 np0005535838.novalocal sudo[7674]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljjydktlojfstzrgsskqoqocifbsbcuh ; /usr/bin/python3'
Nov 25 23:05:23 np0005535838.novalocal sudo[7674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:23 np0005535838.novalocal python3[7676]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:05:24 np0005535838.novalocal sudo[7674]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:24 np0005535838.novalocal sudo[7752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvtemybxrishgxhtrczgbglqtnyxzzln ; /usr/bin/python3'
Nov 25 23:05:24 np0005535838.novalocal sudo[7752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:24 np0005535838.novalocal python3[7754]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:05:24 np0005535838.novalocal sudo[7752]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:24 np0005535838.novalocal sudo[7825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzexafeqzkzdpkbjjgcktqgothakkcnz ; /usr/bin/python3'
Nov 25 23:05:24 np0005535838.novalocal sudo[7825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:24 np0005535838.novalocal python3[7827]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764111924.1817765-478-259793698352113/source _original_basename=tmp568700zs follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:05:24 np0005535838.novalocal sudo[7825]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:25 np0005535838.novalocal sudo[7875]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zewnmghzpowlhgzftumrwfzgjpzfvcbi ; /usr/bin/python3'
Nov 25 23:05:25 np0005535838.novalocal sudo[7875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:25 np0005535838.novalocal python3[7877]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 23:05:25 np0005535838.novalocal systemd[1]: Reloading.
Nov 25 23:05:25 np0005535838.novalocal systemd-rc-local-generator[7898]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:05:26 np0005535838.novalocal sudo[7875]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:27 np0005535838.novalocal sudo[7930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypbovykvrotsyaciqmskiyvytdfbcdjk ; /usr/bin/python3'
Nov 25 23:05:27 np0005535838.novalocal sudo[7930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:27 np0005535838.novalocal python3[7932]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 25 23:05:27 np0005535838.novalocal sudo[7930]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:27 np0005535838.novalocal sudo[7956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kunehrjplpznwqahxrlaliemkaorenex ; /usr/bin/python3'
Nov 25 23:05:27 np0005535838.novalocal sudo[7956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:27 np0005535838.novalocal python3[7958]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:05:27 np0005535838.novalocal sudo[7956]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:28 np0005535838.novalocal sudo[7984]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nldcpmhxpzfwgrojxitiqwyayimmakxi ; /usr/bin/python3'
Nov 25 23:05:28 np0005535838.novalocal sudo[7984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:28 np0005535838.novalocal python3[7986]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:05:28 np0005535838.novalocal sudo[7984]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:28 np0005535838.novalocal sudo[8012]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmbaoikwdkjrgfoxrcfvijowqqiktoxm ; /usr/bin/python3'
Nov 25 23:05:28 np0005535838.novalocal sudo[8012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:28 np0005535838.novalocal python3[8014]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:05:28 np0005535838.novalocal sudo[8012]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:28 np0005535838.novalocal sudo[8040]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxntkpwimhxkugkdpwqptwgyjuivoilb ; /usr/bin/python3'
Nov 25 23:05:28 np0005535838.novalocal sudo[8040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:28 np0005535838.novalocal python3[8042]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:05:28 np0005535838.novalocal sudo[8040]: pam_unix(sudo:session): session closed for user root
Nov 25 23:05:29 np0005535838.novalocal python3[8069]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-ed86-e3a6-000000001cdd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:05:29 np0005535838.novalocal python3[8099]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 23:05:31 np0005535838.novalocal sshd-session[7517]: Connection closed by 38.102.83.114 port 51256
Nov 25 23:05:31 np0005535838.novalocal sshd-session[7514]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:05:31 np0005535838.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 25 23:05:31 np0005535838.novalocal systemd[1]: session-4.scope: Consumed 4.927s CPU time.
Nov 25 23:05:31 np0005535838.novalocal systemd-logind[789]: Session 4 logged out. Waiting for processes to exit.
Nov 25 23:05:31 np0005535838.novalocal systemd-logind[789]: Removed session 4.
Nov 25 23:05:33 np0005535838.novalocal sshd-session[8104]: Accepted publickey for zuul from 38.102.83.114 port 48102 ssh2: RSA SHA256:JQBEbRWdTRxWWBZFMXTcgpV6tX76cFl0ZQrAk8DySQQ
Nov 25 23:05:33 np0005535838.novalocal systemd-logind[789]: New session 5 of user zuul.
Nov 25 23:05:33 np0005535838.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 25 23:05:33 np0005535838.novalocal sshd-session[8104]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:05:33 np0005535838.novalocal sudo[8131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvjgfetinczgvkibqlmvvgnicfmvnbwt ; /usr/bin/python3'
Nov 25 23:05:33 np0005535838.novalocal sudo[8131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:05:33 np0005535838.novalocal python3[8133]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 23:05:50 np0005535838.novalocal kernel: SELinux:  Converting 386 SID table entries...
Nov 25 23:05:50 np0005535838.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 23:05:50 np0005535838.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 23:05:50 np0005535838.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 23:05:50 np0005535838.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 23:05:50 np0005535838.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 23:05:50 np0005535838.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 23:05:50 np0005535838.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 23:06:00 np0005535838.novalocal kernel: SELinux:  Converting 386 SID table entries...
Nov 25 23:06:00 np0005535838.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 23:06:00 np0005535838.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 23:06:00 np0005535838.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 23:06:00 np0005535838.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 23:06:00 np0005535838.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 23:06:00 np0005535838.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 23:06:00 np0005535838.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 23:06:05 np0005535838.novalocal sshd-session[8187]: Invalid user hadoop from 68.183.82.237 port 41310
Nov 25 23:06:05 np0005535838.novalocal sshd-session[8187]: Received disconnect from 68.183.82.237 port 41310:11: Bye Bye [preauth]
Nov 25 23:06:05 np0005535838.novalocal sshd-session[8187]: Disconnected from invalid user hadoop 68.183.82.237 port 41310 [preauth]
Nov 25 23:06:08 np0005535838.novalocal kernel: SELinux:  Converting 386 SID table entries...
Nov 25 23:06:08 np0005535838.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 23:06:08 np0005535838.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 23:06:08 np0005535838.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 23:06:08 np0005535838.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 23:06:08 np0005535838.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 23:06:08 np0005535838.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 23:06:08 np0005535838.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 23:06:08 np0005535838.novalocal sshd[1002]: Timeout before authentication for connection from 101.126.130.181 to 38.102.83.77, pid = 7500
Nov 25 23:06:09 np0005535838.novalocal setsebool[8200]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 25 23:06:09 np0005535838.novalocal setsebool[8200]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 25 23:06:15 np0005535838.novalocal sshd-session[8209]: Received disconnect from 202.83.162.167 port 55520:11: Bye Bye [preauth]
Nov 25 23:06:15 np0005535838.novalocal sshd-session[8209]: Disconnected from authenticating user root 202.83.162.167 port 55520 [preauth]
Nov 25 23:06:20 np0005535838.novalocal kernel: SELinux:  Converting 389 SID table entries...
Nov 25 23:06:20 np0005535838.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 23:06:20 np0005535838.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 23:06:20 np0005535838.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 23:06:20 np0005535838.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 23:06:20 np0005535838.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 23:06:20 np0005535838.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 23:06:20 np0005535838.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 23:06:39 np0005535838.novalocal sshd-session[8918]: Invalid user hadoop from 124.18.141.70 port 37632
Nov 25 23:06:39 np0005535838.novalocal sshd-session[8918]: Received disconnect from 124.18.141.70 port 37632:11: Bye Bye [preauth]
Nov 25 23:06:39 np0005535838.novalocal sshd-session[8918]: Disconnected from invalid user hadoop 124.18.141.70 port 37632 [preauth]
Nov 25 23:06:39 np0005535838.novalocal dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 23:06:39 np0005535838.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 23:06:39 np0005535838.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 25 23:06:39 np0005535838.novalocal systemd[1]: Reloading.
Nov 25 23:06:39 np0005535838.novalocal systemd-rc-local-generator[8960]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:06:39 np0005535838.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 23:06:40 np0005535838.novalocal sudo[8131]: pam_unix(sudo:session): session closed for user root
Nov 25 23:06:41 np0005535838.novalocal python3[10418]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-0553-b758-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:06:42 np0005535838.novalocal kernel: evm: overlay not supported
Nov 25 23:06:42 np0005535838.novalocal systemd[4299]: Starting D-Bus User Message Bus...
Nov 25 23:06:42 np0005535838.novalocal dbus-broker-launch[11414]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 25 23:06:42 np0005535838.novalocal dbus-broker-launch[11414]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 25 23:06:42 np0005535838.novalocal systemd[4299]: Started D-Bus User Message Bus.
Nov 25 23:06:42 np0005535838.novalocal dbus-broker-lau[11414]: Ready
Nov 25 23:06:42 np0005535838.novalocal systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 23:06:42 np0005535838.novalocal systemd[4299]: Created slice Slice /user.
Nov 25 23:06:42 np0005535838.novalocal systemd[4299]: podman-11257.scope: unit configures an IP firewall, but not running as root.
Nov 25 23:06:42 np0005535838.novalocal systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Nov 25 23:06:42 np0005535838.novalocal systemd[4299]: Started podman-11257.scope.
Nov 25 23:06:43 np0005535838.novalocal systemd[4299]: Started podman-pause-11b68d86.scope.
Nov 25 23:06:43 np0005535838.novalocal sudo[11971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywfhnzgdsbdwojlmptzkfycaieuuisjj ; /usr/bin/python3'
Nov 25 23:06:43 np0005535838.novalocal sudo[11971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:06:43 np0005535838.novalocal python3[11985]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.64:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.64:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:06:43 np0005535838.novalocal python3[11985]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 25 23:06:43 np0005535838.novalocal sudo[11971]: pam_unix(sudo:session): session closed for user root
Nov 25 23:06:43 np0005535838.novalocal sshd-session[8107]: Connection closed by 38.102.83.114 port 48102
Nov 25 23:06:43 np0005535838.novalocal sshd-session[8104]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:06:43 np0005535838.novalocal systemd-logind[789]: Session 5 logged out. Waiting for processes to exit.
Nov 25 23:06:43 np0005535838.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 25 23:06:43 np0005535838.novalocal systemd[1]: session-5.scope: Consumed 1min 2.351s CPU time.
Nov 25 23:06:44 np0005535838.novalocal systemd-logind[789]: Removed session 5.
Nov 25 23:06:46 np0005535838.novalocal sshd-session[12654]: Received disconnect from 171.244.140.22 port 42060:11: Bye Bye [preauth]
Nov 25 23:06:46 np0005535838.novalocal sshd-session[12654]: Disconnected from authenticating user root 171.244.140.22 port 42060 [preauth]
Nov 25 23:07:02 np0005535838.novalocal sshd-session[18998]: Unable to negotiate with 38.102.83.97 port 39464: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 25 23:07:02 np0005535838.novalocal sshd-session[19000]: Unable to negotiate with 38.102.83.97 port 39442: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 25 23:07:02 np0005535838.novalocal sshd-session[18997]: Unable to negotiate with 38.102.83.97 port 39450: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 25 23:07:02 np0005535838.novalocal sshd-session[19005]: Connection closed by 38.102.83.97 port 39424 [preauth]
Nov 25 23:07:02 np0005535838.novalocal sshd-session[19001]: Connection closed by 38.102.83.97 port 39436 [preauth]
Nov 25 23:07:06 np0005535838.novalocal sshd-session[20268]: Accepted publickey for zuul from 38.102.83.114 port 41834 ssh2: RSA SHA256:JQBEbRWdTRxWWBZFMXTcgpV6tX76cFl0ZQrAk8DySQQ
Nov 25 23:07:06 np0005535838.novalocal systemd-logind[789]: New session 6 of user zuul.
Nov 25 23:07:06 np0005535838.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 25 23:07:06 np0005535838.novalocal sshd-session[20268]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:07:06 np0005535838.novalocal python3[20389]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPM7Qil9AMr2etKuOlKIrXGcRMkAFiz1ts6BOxq0bX4RBlm/XJgvcH7YJWV/REzh/qqlLkHxRdWzWpdOjdNO7dM= zuul@np0005535837.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 23:07:07 np0005535838.novalocal sudo[20544]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orrfzjomctvaijnhxexwgttnjuvuqzsj ; /usr/bin/python3'
Nov 25 23:07:07 np0005535838.novalocal sudo[20544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:07:07 np0005535838.novalocal python3[20551]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPM7Qil9AMr2etKuOlKIrXGcRMkAFiz1ts6BOxq0bX4RBlm/XJgvcH7YJWV/REzh/qqlLkHxRdWzWpdOjdNO7dM= zuul@np0005535837.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 23:07:07 np0005535838.novalocal sudo[20544]: pam_unix(sudo:session): session closed for user root
Nov 25 23:07:08 np0005535838.novalocal sudo[20804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgrmzvjklnagochcbvgyzuguyarajkqi ; /usr/bin/python3'
Nov 25 23:07:08 np0005535838.novalocal sudo[20804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:07:08 np0005535838.novalocal python3[20810]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005535838.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 25 23:07:08 np0005535838.novalocal useradd[20866]: new group: name=cloud-admin, GID=1002
Nov 25 23:07:08 np0005535838.novalocal useradd[20866]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 25 23:07:08 np0005535838.novalocal sudo[20804]: pam_unix(sudo:session): session closed for user root
Nov 25 23:07:08 np0005535838.novalocal sudo[20980]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pboerkzxfwdnozrpuujawoseyaapkqwt ; /usr/bin/python3'
Nov 25 23:07:08 np0005535838.novalocal sudo[20980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:07:08 np0005535838.novalocal python3[20988]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPM7Qil9AMr2etKuOlKIrXGcRMkAFiz1ts6BOxq0bX4RBlm/XJgvcH7YJWV/REzh/qqlLkHxRdWzWpdOjdNO7dM= zuul@np0005535837.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 23:07:08 np0005535838.novalocal sudo[20980]: pam_unix(sudo:session): session closed for user root
Nov 25 23:07:09 np0005535838.novalocal sudo[21223]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrjdkhjxqhrnricrrgfmagsjbfmhbkds ; /usr/bin/python3'
Nov 25 23:07:09 np0005535838.novalocal sudo[21223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:07:09 np0005535838.novalocal python3[21233]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:07:09 np0005535838.novalocal sudo[21223]: pam_unix(sudo:session): session closed for user root
Nov 25 23:07:09 np0005535838.novalocal sudo[21468]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcthgyehiwthzlwnxjitoxrldioeguhi ; /usr/bin/python3'
Nov 25 23:07:09 np0005535838.novalocal sudo[21468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:07:09 np0005535838.novalocal python3[21479]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764112028.8805213-135-84787243895554/source _original_basename=tmp1n04_1_m follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:07:09 np0005535838.novalocal sudo[21468]: pam_unix(sudo:session): session closed for user root
Nov 25 23:07:10 np0005535838.novalocal sudo[21742]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqnfuolcfebehevtmxpyvukmnuozuhmq ; /usr/bin/python3'
Nov 25 23:07:10 np0005535838.novalocal sudo[21742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:07:10 np0005535838.novalocal python3[21752]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 25 23:07:10 np0005535838.novalocal systemd[1]: Starting Hostname Service...
Nov 25 23:07:10 np0005535838.novalocal systemd[1]: Started Hostname Service.
Nov 25 23:07:10 np0005535838.novalocal systemd-hostnamed[21851]: Changed pretty hostname to 'compute-0'
Nov 25 23:07:10 compute-0 systemd-hostnamed[21851]: Hostname set to <compute-0> (static)
Nov 25 23:07:10 compute-0 NetworkManager[7181]: <info>  [1764112030.8148] hostname: static hostname changed from "np0005535838.novalocal" to "compute-0"
Nov 25 23:07:10 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 23:07:10 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 23:07:10 compute-0 sudo[21742]: pam_unix(sudo:session): session closed for user root
Nov 25 23:07:11 compute-0 sshd-session[20322]: Connection closed by 38.102.83.114 port 41834
Nov 25 23:07:11 compute-0 sshd-session[20268]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:07:11 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 25 23:07:11 compute-0 systemd[1]: session-6.scope: Consumed 2.615s CPU time.
Nov 25 23:07:11 compute-0 systemd-logind[789]: Session 6 logged out. Waiting for processes to exit.
Nov 25 23:07:11 compute-0 systemd-logind[789]: Removed session 6.
Nov 25 23:07:16 compute-0 sshd-session[23301]: Invalid user ubuntu from 68.183.82.237 port 34274
Nov 25 23:07:16 compute-0 sshd-session[23301]: Received disconnect from 68.183.82.237 port 34274:11: Bye Bye [preauth]
Nov 25 23:07:16 compute-0 sshd-session[23301]: Disconnected from invalid user ubuntu 68.183.82.237 port 34274 [preauth]
Nov 25 23:07:20 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 23:07:33 compute-0 sshd-session[28456]: Received disconnect from 121.37.183.152 port 59102:11: Bye Bye [preauth]
Nov 25 23:07:33 compute-0 sshd-session[28456]: Disconnected from authenticating user root 121.37.183.152 port 59102 [preauth]
Nov 25 23:07:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 23:07:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 23:07:37 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 10.338s CPU time.
Nov 25 23:07:37 compute-0 systemd[1]: run-r65364839656d4d5385082e5d9b56d764.service: Deactivated successfully.
Nov 25 23:07:40 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 23:07:45 compute-0 sshd-session[30019]: Invalid user ubuntu from 202.83.162.167 port 44800
Nov 25 23:07:45 compute-0 sshd-session[30019]: Received disconnect from 202.83.162.167 port 44800:11: Bye Bye [preauth]
Nov 25 23:07:45 compute-0 sshd-session[30019]: Disconnected from invalid user ubuntu 202.83.162.167 port 44800 [preauth]
Nov 25 23:07:54 compute-0 sshd-session[30021]: Connection reset by 121.37.183.152 port 41276 [preauth]
Nov 25 23:08:12 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 25 23:08:12 compute-0 sshd-session[30023]: Invalid user user from 124.18.141.70 port 43120
Nov 25 23:08:12 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 25 23:08:12 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 25 23:08:12 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 25 23:08:12 compute-0 sshd-session[30023]: Received disconnect from 124.18.141.70 port 43120:11: Bye Bye [preauth]
Nov 25 23:08:12 compute-0 sshd-session[30023]: Disconnected from invalid user user 124.18.141.70 port 43120 [preauth]
Nov 25 23:08:26 compute-0 sshd[1002]: Timeout before authentication for connection from 45.78.216.132 to 38.102.83.77, pid = 8916
Nov 25 23:08:32 compute-0 sshd-session[30027]: Invalid user docker from 68.183.82.237 port 46656
Nov 25 23:08:32 compute-0 sshd-session[30027]: Received disconnect from 68.183.82.237 port 46656:11: Bye Bye [preauth]
Nov 25 23:08:32 compute-0 sshd-session[30027]: Disconnected from invalid user docker 68.183.82.237 port 46656 [preauth]
Nov 25 23:08:49 compute-0 sshd-session[30029]: Received disconnect from 171.244.140.22 port 59296:11: Bye Bye [preauth]
Nov 25 23:08:49 compute-0 sshd-session[30029]: Disconnected from authenticating user root 171.244.140.22 port 59296 [preauth]
Nov 25 23:09:02 compute-0 sshd[1002]: drop connection #0 from [45.78.216.132]:44290 on [38.102.83.77]:22 penalty: exceeded LoginGraceTime
Nov 25 23:09:17 compute-0 sshd-session[30033]: Invalid user alex from 202.83.162.167 port 39242
Nov 25 23:09:17 compute-0 sshd-session[30033]: Received disconnect from 202.83.162.167 port 39242:11: Bye Bye [preauth]
Nov 25 23:09:17 compute-0 sshd-session[30033]: Disconnected from invalid user alex 202.83.162.167 port 39242 [preauth]
Nov 25 23:09:47 compute-0 sshd-session[30036]: Received disconnect from 68.183.82.237 port 47806:11: Bye Bye [preauth]
Nov 25 23:09:47 compute-0 sshd-session[30036]: Disconnected from authenticating user root 68.183.82.237 port 47806 [preauth]
Nov 25 23:09:52 compute-0 sshd-session[30038]: Invalid user tech from 124.18.141.70 port 37182
Nov 25 23:09:52 compute-0 sshd-session[30038]: Received disconnect from 124.18.141.70 port 37182:11: Bye Bye [preauth]
Nov 25 23:09:52 compute-0 sshd-session[30038]: Disconnected from invalid user tech 124.18.141.70 port 37182 [preauth]
Nov 25 23:10:44 compute-0 sshd-session[30040]: Received disconnect from 202.83.162.167 port 52656:11: Bye Bye [preauth]
Nov 25 23:10:44 compute-0 sshd-session[30040]: Disconnected from authenticating user root 202.83.162.167 port 52656 [preauth]
Nov 25 23:10:47 compute-0 sshd-session[30042]: Received disconnect from 171.244.140.22 port 43884:11: Bye Bye [preauth]
Nov 25 23:10:47 compute-0 sshd-session[30042]: Disconnected from authenticating user root 171.244.140.22 port 43884 [preauth]
Nov 25 23:10:59 compute-0 sshd-session[30044]: Received disconnect from 68.183.82.237 port 50898:11: Bye Bye [preauth]
Nov 25 23:10:59 compute-0 sshd-session[30044]: Disconnected from authenticating user root 68.183.82.237 port 50898 [preauth]
Nov 25 23:11:05 compute-0 sshd-session[30046]: Accepted publickey for zuul from 38.102.83.97 port 56678 ssh2: RSA SHA256:JQBEbRWdTRxWWBZFMXTcgpV6tX76cFl0ZQrAk8DySQQ
Nov 25 23:11:05 compute-0 systemd-logind[789]: New session 7 of user zuul.
Nov 25 23:11:05 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 25 23:11:05 compute-0 sshd-session[30046]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:11:06 compute-0 python3[30122]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:11:09 compute-0 sudo[30236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtmcjozbvqynbsnbqdwofutqgxbudkqw ; /usr/bin/python3'
Nov 25 23:11:09 compute-0 sudo[30236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:09 compute-0 python3[30238]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:11:09 compute-0 sudo[30236]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:09 compute-0 sudo[30309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkojjvldsafqirigglvubkrgziilftco ; /usr/bin/python3'
Nov 25 23:11:09 compute-0 sudo[30309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:09 compute-0 python3[30311]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:11:09 compute-0 sudo[30309]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:09 compute-0 sudo[30335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iamyienbunrrtmznnqnzwkrbhkdwdbez ; /usr/bin/python3'
Nov 25 23:11:09 compute-0 sudo[30335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:10 compute-0 python3[30337]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:11:10 compute-0 sudo[30335]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:10 compute-0 sudo[30408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kampthdcyiuhlsgtlfhxhsarqseqsljy ; /usr/bin/python3'
Nov 25 23:11:10 compute-0 sudo[30408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:10 compute-0 python3[30410]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:11:10 compute-0 sudo[30408]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:10 compute-0 sudo[30434]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-escappsnugrlrlubpkmpupvdgqkfuija ; /usr/bin/python3'
Nov 25 23:11:10 compute-0 sudo[30434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:10 compute-0 python3[30436]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:11:10 compute-0 sudo[30434]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:11 compute-0 sudo[30507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alabwowofejwlzkytuqejbjhuhlwqjzx ; /usr/bin/python3'
Nov 25 23:11:11 compute-0 sudo[30507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:11 compute-0 python3[30509]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:11:11 compute-0 sudo[30507]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:11 compute-0 sudo[30533]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdcmismvuihgtlqlsujffvdajbawylrq ; /usr/bin/python3'
Nov 25 23:11:11 compute-0 sudo[30533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:11 compute-0 python3[30535]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:11:11 compute-0 sudo[30533]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:11 compute-0 sudo[30606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlyxpgzzbgjyqsjysttdpxbnrzsdanbi ; /usr/bin/python3'
Nov 25 23:11:11 compute-0 sudo[30606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:11 compute-0 python3[30608]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:11:11 compute-0 sudo[30606]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:11 compute-0 sudo[30632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unyiqmcwujyqlkytxtjscfeujhgprqiy ; /usr/bin/python3'
Nov 25 23:11:11 compute-0 sudo[30632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:12 compute-0 python3[30634]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:11:12 compute-0 sudo[30632]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:12 compute-0 sudo[30705]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvdutldmulfotmqpkqebemfumepxceuu ; /usr/bin/python3'
Nov 25 23:11:12 compute-0 sudo[30705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:12 compute-0 python3[30707]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:11:12 compute-0 sudo[30705]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:12 compute-0 sudo[30731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxkzajveousxivxvhoprmtqkegqriwdm ; /usr/bin/python3'
Nov 25 23:11:12 compute-0 sudo[30731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:12 compute-0 python3[30733]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:11:12 compute-0 sudo[30731]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:13 compute-0 sudo[30804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-parnqzcgcqosdfhdllbtohomdoegirtd ; /usr/bin/python3'
Nov 25 23:11:13 compute-0 sudo[30804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:13 compute-0 python3[30806]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:11:13 compute-0 sudo[30804]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:13 compute-0 sudo[30830]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbelxmidpnciimhznspivcpugrrjmgue ; /usr/bin/python3'
Nov 25 23:11:13 compute-0 sudo[30830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:13 compute-0 python3[30832]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:11:13 compute-0 sudo[30830]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:13 compute-0 sudo[30903]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-indnsleinrxjimcitgbpfpgqmifoyukx ; /usr/bin/python3'
Nov 25 23:11:13 compute-0 sudo[30903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:11:13 compute-0 python3[30905]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764112268.953095-33647-43402362039189/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:11:14 compute-0 sudo[30903]: pam_unix(sudo:session): session closed for user root
Nov 25 23:11:16 compute-0 sshd-session[30932]: Unable to negotiate with 192.168.122.11 port 46052: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 25 23:11:16 compute-0 sshd-session[30931]: Connection closed by 192.168.122.11 port 46028 [preauth]
Nov 25 23:11:16 compute-0 sshd-session[30930]: Connection closed by 192.168.122.11 port 46038 [preauth]
Nov 25 23:11:16 compute-0 sshd-session[30933]: Unable to negotiate with 192.168.122.11 port 46046: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 25 23:11:16 compute-0 sshd-session[30934]: Unable to negotiate with 192.168.122.11 port 46054: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 25 23:11:25 compute-0 python3[30963]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:11:28 compute-0 sshd-session[30965]: Invalid user ubuntu from 124.18.141.70 port 45302
Nov 25 23:11:28 compute-0 sshd-session[30965]: Received disconnect from 124.18.141.70 port 45302:11: Bye Bye [preauth]
Nov 25 23:11:28 compute-0 sshd-session[30965]: Disconnected from invalid user ubuntu 124.18.141.70 port 45302 [preauth]
Nov 25 23:12:09 compute-0 sshd-session[30968]: Invalid user tibero from 68.183.82.237 port 35114
Nov 25 23:12:10 compute-0 sshd-session[30968]: Received disconnect from 68.183.82.237 port 35114:11: Bye Bye [preauth]
Nov 25 23:12:10 compute-0 sshd-session[30968]: Disconnected from invalid user tibero 68.183.82.237 port 35114 [preauth]
Nov 25 23:12:10 compute-0 sshd-session[30970]: Invalid user appuser from 202.83.162.167 port 59006
Nov 25 23:12:11 compute-0 sshd-session[30970]: Received disconnect from 202.83.162.167 port 59006:11: Bye Bye [preauth]
Nov 25 23:12:11 compute-0 sshd-session[30970]: Disconnected from invalid user appuser 202.83.162.167 port 59006 [preauth]
Nov 25 23:12:18 compute-0 sshd-session[30972]: Connection closed by 18.190.152.179 port 33134
Nov 25 23:12:46 compute-0 sshd-session[30974]: Received disconnect from 171.244.140.22 port 59064:11: Bye Bye [preauth]
Nov 25 23:12:46 compute-0 sshd-session[30974]: Disconnected from authenticating user root 171.244.140.22 port 59064 [preauth]
Nov 25 23:13:04 compute-0 sshd-session[30976]: Invalid user test3 from 124.18.141.70 port 44760
Nov 25 23:13:04 compute-0 sshd-session[30976]: Received disconnect from 124.18.141.70 port 44760:11: Bye Bye [preauth]
Nov 25 23:13:04 compute-0 sshd-session[30976]: Disconnected from invalid user test3 124.18.141.70 port 44760 [preauth]
Nov 25 23:13:22 compute-0 sshd-session[30978]: Received disconnect from 68.183.82.237 port 50776:11: Bye Bye [preauth]
Nov 25 23:13:22 compute-0 sshd-session[30978]: Disconnected from authenticating user root 68.183.82.237 port 50776 [preauth]
Nov 25 23:13:37 compute-0 sshd-session[30980]: Received disconnect from 202.83.162.167 port 52256:11: Bye Bye [preauth]
Nov 25 23:13:37 compute-0 sshd-session[30980]: Disconnected from authenticating user root 202.83.162.167 port 52256 [preauth]
Nov 25 23:13:46 compute-0 sshd[1002]: Timeout before authentication for connection from 45.78.216.132 to 38.102.83.77, pid = 30967
Nov 25 23:14:22 compute-0 sshd[1002]: drop connection #0 from [45.78.216.132]:36236 on [38.102.83.77]:22 penalty: exceeded LoginGraceTime
Nov 25 23:14:35 compute-0 sshd-session[30982]: Invalid user bitnami from 68.183.82.237 port 47878
Nov 25 23:14:36 compute-0 sshd-session[30982]: Received disconnect from 68.183.82.237 port 47878:11: Bye Bye [preauth]
Nov 25 23:14:36 compute-0 sshd-session[30982]: Disconnected from invalid user bitnami 68.183.82.237 port 47878 [preauth]
Nov 25 23:14:39 compute-0 sshd-session[30984]: Received disconnect from 124.18.141.70 port 36200:11: Bye Bye [preauth]
Nov 25 23:14:39 compute-0 sshd-session[30984]: Disconnected from authenticating user root 124.18.141.70 port 36200 [preauth]
Nov 25 23:14:46 compute-0 sshd-session[30986]: Received disconnect from 171.244.140.22 port 59476:11: Bye Bye [preauth]
Nov 25 23:14:46 compute-0 sshd-session[30986]: Disconnected from authenticating user root 171.244.140.22 port 59476 [preauth]
Nov 25 23:15:06 compute-0 sshd-session[30989]: Invalid user mm from 202.83.162.167 port 38362
Nov 25 23:15:06 compute-0 sshd-session[30989]: Received disconnect from 202.83.162.167 port 38362:11: Bye Bye [preauth]
Nov 25 23:15:06 compute-0 sshd-session[30989]: Disconnected from invalid user mm 202.83.162.167 port 38362 [preauth]
Nov 25 23:15:57 compute-0 sshd-session[30992]: Invalid user vyos from 68.183.82.237 port 48540
Nov 25 23:15:57 compute-0 sshd-session[30992]: Received disconnect from 68.183.82.237 port 48540:11: Bye Bye [preauth]
Nov 25 23:15:57 compute-0 sshd-session[30992]: Disconnected from invalid user vyos 68.183.82.237 port 48540 [preauth]
Nov 25 23:16:17 compute-0 sshd-session[30994]: Invalid user afa from 124.18.141.70 port 48446
Nov 25 23:16:17 compute-0 sshd-session[30994]: Received disconnect from 124.18.141.70 port 48446:11: Bye Bye [preauth]
Nov 25 23:16:17 compute-0 sshd-session[30994]: Disconnected from invalid user afa 124.18.141.70 port 48446 [preauth]
Nov 25 23:16:25 compute-0 sshd-session[30049]: Received disconnect from 38.102.83.97 port 56678:11: disconnected by user
Nov 25 23:16:25 compute-0 sshd-session[30049]: Disconnected from user zuul 38.102.83.97 port 56678
Nov 25 23:16:25 compute-0 sshd-session[30046]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:16:25 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 25 23:16:25 compute-0 systemd[1]: session-7.scope: Consumed 5.896s CPU time.
Nov 25 23:16:25 compute-0 systemd-logind[789]: Session 7 logged out. Waiting for processes to exit.
Nov 25 23:16:25 compute-0 systemd-logind[789]: Removed session 7.
Nov 25 23:16:34 compute-0 sshd-session[30996]: Received disconnect from 202.83.162.167 port 43630:11: Bye Bye [preauth]
Nov 25 23:16:34 compute-0 sshd-session[30996]: Disconnected from authenticating user root 202.83.162.167 port 43630 [preauth]
Nov 25 23:16:40 compute-0 sshd-session[30998]: Received disconnect from 171.244.140.22 port 50162:11: Bye Bye [preauth]
Nov 25 23:16:40 compute-0 sshd-session[30998]: Disconnected from authenticating user root 171.244.140.22 port 50162 [preauth]
Nov 25 23:16:43 compute-0 sshd-session[31000]: Connection closed by authenticating user root 85.94.49.204 port 55522 [preauth]
Nov 25 23:16:44 compute-0 sshd-session[31002]: Connection closed by authenticating user root 85.94.49.204 port 55532 [preauth]
Nov 25 23:16:47 compute-0 sshd-session[31004]: Connection closed by authenticating user root 85.94.49.204 port 55544 [preauth]
Nov 25 23:16:48 compute-0 sshd-session[31006]: Connection closed by authenticating user root 85.94.49.204 port 55546 [preauth]
Nov 25 23:16:48 compute-0 sshd-session[31008]: Connection closed by authenticating user root 85.94.49.204 port 55560 [preauth]
Nov 25 23:16:49 compute-0 sshd-session[31010]: Connection closed by authenticating user root 85.94.49.204 port 56786 [preauth]
Nov 25 23:16:50 compute-0 sshd-session[31012]: Connection closed by authenticating user root 85.94.49.204 port 56800 [preauth]
Nov 25 23:16:51 compute-0 sshd-session[31014]: Connection closed by authenticating user root 85.94.49.204 port 56810 [preauth]
Nov 25 23:16:52 compute-0 sshd-session[31016]: Connection closed by authenticating user root 85.94.49.204 port 56814 [preauth]
Nov 25 23:16:52 compute-0 sshd-session[31018]: Connection closed by authenticating user root 85.94.49.204 port 56816 [preauth]
Nov 25 23:16:53 compute-0 sshd-session[31020]: Connection closed by authenticating user root 85.94.49.204 port 56830 [preauth]
Nov 25 23:16:54 compute-0 sshd-session[31022]: Connection closed by authenticating user root 85.94.49.204 port 56836 [preauth]
Nov 25 23:16:55 compute-0 sshd-session[31024]: Connection closed by authenticating user root 85.94.49.204 port 56846 [preauth]
Nov 25 23:16:55 compute-0 sshd-session[31026]: Connection closed by authenticating user root 85.94.49.204 port 56856 [preauth]
Nov 25 23:16:56 compute-0 sshd-session[31029]: Connection closed by authenticating user root 85.94.49.204 port 56866 [preauth]
Nov 25 23:16:57 compute-0 sshd-session[31031]: Connection closed by authenticating user root 85.94.49.204 port 56880 [preauth]
Nov 25 23:16:58 compute-0 sshd-session[31033]: Connection closed by authenticating user root 85.94.49.204 port 56888 [preauth]
Nov 25 23:16:58 compute-0 sshd-session[31035]: Connection closed by authenticating user root 85.94.49.204 port 56896 [preauth]
Nov 25 23:16:59 compute-0 sshd-session[31037]: Connection closed by authenticating user root 85.94.49.204 port 36802 [preauth]
Nov 25 23:17:00 compute-0 sshd-session[31039]: Connection closed by authenticating user root 85.94.49.204 port 36814 [preauth]
Nov 25 23:17:01 compute-0 sshd-session[31041]: Connection closed by authenticating user root 85.94.49.204 port 36816 [preauth]
Nov 25 23:17:01 compute-0 sshd-session[31028]: Connection closed by 45.78.216.132 port 35532 [preauth]
Nov 25 23:17:01 compute-0 sshd-session[31043]: Connection closed by authenticating user root 85.94.49.204 port 36818 [preauth]
Nov 25 23:17:02 compute-0 sshd-session[31046]: Connection closed by authenticating user root 85.94.49.204 port 36824 [preauth]
Nov 25 23:17:03 compute-0 sshd-session[31048]: Connection closed by authenticating user root 85.94.49.204 port 36830 [preauth]
Nov 25 23:17:04 compute-0 sshd-session[31050]: Connection closed by authenticating user root 85.94.49.204 port 36838 [preauth]
Nov 25 23:17:05 compute-0 sshd-session[31052]: Connection closed by authenticating user root 85.94.49.204 port 36842 [preauth]
Nov 25 23:17:05 compute-0 sshd-session[31054]: Connection closed by authenticating user root 85.94.49.204 port 36858 [preauth]
Nov 25 23:17:06 compute-0 sshd-session[31056]: Connection closed by authenticating user root 85.94.49.204 port 36860 [preauth]
Nov 25 23:17:07 compute-0 sshd-session[31058]: Connection closed by authenticating user root 85.94.49.204 port 36876 [preauth]
Nov 25 23:17:08 compute-0 sshd-session[31060]: Connection closed by authenticating user root 85.94.49.204 port 36882 [preauth]
Nov 25 23:17:08 compute-0 sshd-session[31062]: Connection closed by authenticating user root 85.94.49.204 port 36890 [preauth]
Nov 25 23:17:09 compute-0 sshd-session[31064]: Connection closed by authenticating user root 85.94.49.204 port 33122 [preauth]
Nov 25 23:17:10 compute-0 sshd-session[31066]: Connection closed by authenticating user root 85.94.49.204 port 33132 [preauth]
Nov 25 23:17:11 compute-0 sshd-session[31068]: Connection closed by authenticating user root 85.94.49.204 port 33138 [preauth]
Nov 25 23:17:11 compute-0 sshd-session[31070]: Connection closed by authenticating user root 85.94.49.204 port 33152 [preauth]
Nov 25 23:17:12 compute-0 sshd-session[31072]: Invalid user sftp from 68.183.82.237 port 50288
Nov 25 23:17:12 compute-0 sshd-session[31074]: Connection closed by authenticating user root 85.94.49.204 port 33154 [preauth]
Nov 25 23:17:12 compute-0 sshd-session[31072]: Received disconnect from 68.183.82.237 port 50288:11: Bye Bye [preauth]
Nov 25 23:17:12 compute-0 sshd-session[31072]: Disconnected from invalid user sftp 68.183.82.237 port 50288 [preauth]
Nov 25 23:17:13 compute-0 sshd-session[31076]: Connection closed by authenticating user root 85.94.49.204 port 33160 [preauth]
Nov 25 23:17:14 compute-0 sshd-session[31078]: Connection closed by authenticating user root 85.94.49.204 port 33174 [preauth]
Nov 25 23:17:14 compute-0 sshd-session[31080]: Connection closed by authenticating user root 85.94.49.204 port 33190 [preauth]
Nov 25 23:17:15 compute-0 sshd-session[31082]: Connection closed by authenticating user root 85.94.49.204 port 33204 [preauth]
Nov 25 23:17:16 compute-0 sshd-session[31084]: Connection closed by authenticating user root 85.94.49.204 port 33214 [preauth]
Nov 25 23:17:17 compute-0 sshd-session[31086]: Connection closed by authenticating user root 85.94.49.204 port 33226 [preauth]
Nov 25 23:17:20 compute-0 sshd-session[31088]: Connection closed by authenticating user root 85.94.49.204 port 33234 [preauth]
Nov 25 23:17:20 compute-0 sshd-session[31090]: Connection closed by authenticating user root 85.94.49.204 port 55996 [preauth]
Nov 25 23:17:21 compute-0 sshd-session[31092]: Connection closed by authenticating user root 85.94.49.204 port 56006 [preauth]
Nov 25 23:17:22 compute-0 sshd-session[31094]: Connection closed by authenticating user root 85.94.49.204 port 56018 [preauth]
Nov 25 23:17:23 compute-0 sshd-session[31096]: Connection closed by authenticating user root 85.94.49.204 port 56026 [preauth]
Nov 25 23:17:23 compute-0 sshd-session[31098]: Connection closed by authenticating user root 85.94.49.204 port 56036 [preauth]
Nov 25 23:17:24 compute-0 sshd-session[31100]: Connection closed by authenticating user root 85.94.49.204 port 56050 [preauth]
Nov 25 23:17:25 compute-0 sshd-session[31102]: Connection closed by authenticating user root 85.94.49.204 port 56060 [preauth]
Nov 25 23:17:26 compute-0 sshd-session[31104]: Connection closed by authenticating user root 85.94.49.204 port 56074 [preauth]
Nov 25 23:17:26 compute-0 sshd-session[31106]: Connection closed by authenticating user root 85.94.49.204 port 56090 [preauth]
Nov 25 23:17:27 compute-0 sshd-session[31108]: Connection closed by authenticating user root 85.94.49.204 port 56100 [preauth]
Nov 25 23:17:28 compute-0 sshd-session[31110]: Connection closed by authenticating user root 85.94.49.204 port 56108 [preauth]
Nov 25 23:17:29 compute-0 sshd-session[31112]: Connection closed by authenticating user root 85.94.49.204 port 56124 [preauth]
Nov 25 23:17:29 compute-0 sshd-session[31114]: Connection closed by authenticating user root 85.94.49.204 port 47194 [preauth]
Nov 25 23:17:30 compute-0 sshd-session[31116]: Connection closed by authenticating user root 85.94.49.204 port 47208 [preauth]
Nov 25 23:17:31 compute-0 sshd-session[31118]: Connection closed by authenticating user root 85.94.49.204 port 47210 [preauth]
Nov 25 23:17:32 compute-0 sshd-session[31120]: Connection closed by authenticating user root 85.94.49.204 port 47218 [preauth]
Nov 25 23:17:33 compute-0 sshd-session[31122]: Connection closed by authenticating user root 85.94.49.204 port 47228 [preauth]
Nov 25 23:17:33 compute-0 sshd-session[31124]: Connection closed by authenticating user root 85.94.49.204 port 47234 [preauth]
Nov 25 23:17:34 compute-0 sshd-session[31126]: Connection closed by authenticating user root 85.94.49.204 port 47236 [preauth]
Nov 25 23:17:35 compute-0 sshd-session[31128]: Connection closed by authenticating user root 85.94.49.204 port 47244 [preauth]
Nov 25 23:17:36 compute-0 sshd-session[31130]: Connection closed by authenticating user root 85.94.49.204 port 47246 [preauth]
Nov 25 23:17:36 compute-0 sshd-session[31132]: Connection closed by authenticating user root 85.94.49.204 port 47260 [preauth]
Nov 25 23:17:37 compute-0 sshd-session[31134]: Connection closed by authenticating user root 85.94.49.204 port 47268 [preauth]
Nov 25 23:17:38 compute-0 sshd-session[31136]: Connection closed by authenticating user root 85.94.49.204 port 47282 [preauth]
Nov 25 23:17:39 compute-0 sshd-session[31138]: Connection closed by authenticating user root 85.94.49.204 port 47290 [preauth]
Nov 25 23:17:39 compute-0 sshd-session[31140]: Connection closed by authenticating user root 85.94.49.204 port 41276 [preauth]
Nov 25 23:17:40 compute-0 sshd-session[31142]: Connection closed by authenticating user root 85.94.49.204 port 41280 [preauth]
Nov 25 23:17:41 compute-0 sshd-session[31144]: Connection closed by authenticating user root 85.94.49.204 port 41296 [preauth]
Nov 25 23:17:42 compute-0 sshd-session[31146]: Connection closed by authenticating user root 85.94.49.204 port 41304 [preauth]
Nov 25 23:17:42 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41318 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:42 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41322 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:43 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41330 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:44 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41340 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:44 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41352 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:44 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41354 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:44 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41364 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:45 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41376 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:45 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41384 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:45 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41390 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:45 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41406 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:46 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41418 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:46 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41432 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:46 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41440 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:46 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41448 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:47 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41452 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:47 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41464 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:47 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41480 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:47 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41484 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:48 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41492 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:48 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41494 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:48 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41496 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:48 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:41508 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:49 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51258 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:49 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51272 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:49 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51274 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:49 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51286 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:50 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51292 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:50 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51294 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:50 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51300 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:50 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51308 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:51 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51314 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:51 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51326 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:51 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51336 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:51 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51338 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:52 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51346 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:52 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51362 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:52 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51366 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:52 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51370 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:53 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51378 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:53 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51390 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:53 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51392 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:53 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51396 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:54 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51400 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:54 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51402 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:54 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51412 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:54 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51426 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:55 compute-0 sshd[1002]: drop connection #1 from [85.94.49.204]:51434 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:55 compute-0 sshd[1002]: drop connection #1 from [85.94.49.204]:51442 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:55 compute-0 sshd[1002]: drop connection #1 from [85.94.49.204]:51448 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:55 compute-0 sshd[1002]: drop connection #1 from [85.94.49.204]:51454 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:56 compute-0 sshd[1002]: drop connection #1 from [85.94.49.204]:51470 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:56 compute-0 sshd-session[31148]: Received disconnect from 124.18.141.70 port 53320:11: Bye Bye [preauth]
Nov 25 23:17:56 compute-0 sshd-session[31148]: Disconnected from authenticating user root 124.18.141.70 port 53320 [preauth]
Nov 25 23:17:56 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51476 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:56 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51482 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:56 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51490 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:57 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51494 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:57 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51504 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:57 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51516 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:57 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51528 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:58 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:51536 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:17:58 compute-0 sshd-session[31150]: Invalid user user from 85.94.49.204 port 51546
Nov 25 23:17:59 compute-0 sshd-session[31150]: Connection closed by invalid user user 85.94.49.204 port 51546 [preauth]
Nov 25 23:17:59 compute-0 sshd-session[31152]: Invalid user user from 85.94.49.204 port 52714
Nov 25 23:17:59 compute-0 sshd-session[31152]: Connection closed by invalid user user 85.94.49.204 port 52714 [preauth]
Nov 25 23:18:00 compute-0 sshd-session[31154]: Invalid user user from 85.94.49.204 port 52724
Nov 25 23:18:00 compute-0 sshd-session[31154]: Connection closed by invalid user user 85.94.49.204 port 52724 [preauth]
Nov 25 23:18:01 compute-0 sshd-session[31156]: Invalid user user from 85.94.49.204 port 52738
Nov 25 23:18:01 compute-0 sshd-session[31156]: Connection closed by invalid user user 85.94.49.204 port 52738 [preauth]
Nov 25 23:18:01 compute-0 sshd-session[31158]: Invalid user user from 85.94.49.204 port 52754
Nov 25 23:18:02 compute-0 sshd-session[31158]: Connection closed by invalid user user 85.94.49.204 port 52754 [preauth]
Nov 25 23:18:02 compute-0 sshd-session[31162]: Invalid user user from 85.94.49.204 port 52768
Nov 25 23:18:02 compute-0 sshd-session[31162]: Connection closed by invalid user user 85.94.49.204 port 52768 [preauth]
Nov 25 23:18:03 compute-0 sshd-session[31160]: Invalid user elsearch from 202.83.162.167 port 54980
Nov 25 23:18:03 compute-0 sshd-session[31160]: Received disconnect from 202.83.162.167 port 54980:11: Bye Bye [preauth]
Nov 25 23:18:03 compute-0 sshd-session[31160]: Disconnected from invalid user elsearch 202.83.162.167 port 54980 [preauth]
Nov 25 23:18:03 compute-0 sshd-session[31164]: Invalid user user from 85.94.49.204 port 52782
Nov 25 23:18:03 compute-0 sshd-session[31164]: Connection closed by invalid user user 85.94.49.204 port 52782 [preauth]
Nov 25 23:18:04 compute-0 sshd-session[31166]: Invalid user user from 85.94.49.204 port 52788
Nov 25 23:18:04 compute-0 sshd-session[31166]: Connection closed by invalid user user 85.94.49.204 port 52788 [preauth]
Nov 25 23:18:05 compute-0 sshd-session[31168]: Invalid user user from 85.94.49.204 port 52802
Nov 25 23:18:05 compute-0 sshd-session[31168]: Connection closed by invalid user user 85.94.49.204 port 52802 [preauth]
Nov 25 23:18:05 compute-0 sshd-session[31170]: Invalid user user from 85.94.49.204 port 52816
Nov 25 23:18:05 compute-0 sshd-session[31170]: Connection closed by invalid user user 85.94.49.204 port 52816 [preauth]
Nov 25 23:18:06 compute-0 sshd-session[31172]: Invalid user user from 85.94.49.204 port 52822
Nov 25 23:18:06 compute-0 sshd-session[31172]: Connection closed by invalid user user 85.94.49.204 port 52822 [preauth]
Nov 25 23:18:07 compute-0 sshd-session[31174]: Invalid user user from 85.94.49.204 port 52824
Nov 25 23:18:07 compute-0 sshd-session[31174]: Connection closed by invalid user user 85.94.49.204 port 52824 [preauth]
Nov 25 23:18:08 compute-0 sshd-session[31176]: Invalid user user from 85.94.49.204 port 52826
Nov 25 23:18:08 compute-0 sshd-session[31176]: Connection closed by invalid user user 85.94.49.204 port 52826 [preauth]
Nov 25 23:18:08 compute-0 sshd-session[31178]: Invalid user user from 85.94.49.204 port 52838
Nov 25 23:18:09 compute-0 sshd-session[31178]: Connection closed by invalid user user 85.94.49.204 port 52838 [preauth]
Nov 25 23:18:09 compute-0 sshd-session[31180]: Invalid user user from 85.94.49.204 port 52598
Nov 25 23:18:09 compute-0 sshd-session[31180]: Connection closed by invalid user user 85.94.49.204 port 52598 [preauth]
Nov 25 23:18:10 compute-0 sshd-session[31182]: Invalid user user from 85.94.49.204 port 52612
Nov 25 23:18:10 compute-0 sshd-session[31182]: Connection closed by invalid user user 85.94.49.204 port 52612 [preauth]
Nov 25 23:18:12 compute-0 sshd-session[31184]: Invalid user user from 85.94.49.204 port 52614
Nov 25 23:18:12 compute-0 sshd-session[31184]: Connection closed by invalid user user 85.94.49.204 port 52614 [preauth]
Nov 25 23:18:13 compute-0 sshd-session[31186]: Invalid user user from 85.94.49.204 port 52622
Nov 25 23:18:13 compute-0 sshd-session[31186]: Connection closed by invalid user user 85.94.49.204 port 52622 [preauth]
Nov 25 23:18:13 compute-0 sshd-session[31188]: Invalid user user from 85.94.49.204 port 52628
Nov 25 23:18:13 compute-0 sshd-session[31188]: Connection closed by invalid user user 85.94.49.204 port 52628 [preauth]
Nov 25 23:18:14 compute-0 sshd-session[31190]: Invalid user user from 85.94.49.204 port 52644
Nov 25 23:18:14 compute-0 sshd-session[31190]: Connection closed by invalid user user 85.94.49.204 port 52644 [preauth]
Nov 25 23:18:15 compute-0 sshd-session[31192]: Invalid user user from 85.94.49.204 port 52656
Nov 25 23:18:15 compute-0 sshd-session[31192]: Connection closed by invalid user user 85.94.49.204 port 52656 [preauth]
Nov 25 23:18:16 compute-0 sshd-session[31194]: Invalid user user from 85.94.49.204 port 52660
Nov 25 23:18:16 compute-0 sshd-session[31194]: Connection closed by invalid user user 85.94.49.204 port 52660 [preauth]
Nov 25 23:18:16 compute-0 sshd-session[31196]: Invalid user user from 85.94.49.204 port 52676
Nov 25 23:18:17 compute-0 sshd-session[31196]: Connection closed by invalid user user 85.94.49.204 port 52676 [preauth]
Nov 25 23:18:17 compute-0 sshd-session[31198]: Invalid user user from 85.94.49.204 port 52680
Nov 25 23:18:17 compute-0 sshd-session[31198]: Connection closed by invalid user user 85.94.49.204 port 52680 [preauth]
Nov 25 23:18:18 compute-0 sshd-session[31200]: Invalid user user from 85.94.49.204 port 52688
Nov 25 23:18:18 compute-0 sshd-session[31200]: Connection closed by invalid user user 85.94.49.204 port 52688 [preauth]
Nov 25 23:18:19 compute-0 sshd-session[31202]: Invalid user user from 85.94.49.204 port 52698
Nov 25 23:18:19 compute-0 sshd-session[31202]: Connection closed by invalid user user 85.94.49.204 port 52698 [preauth]
Nov 25 23:18:19 compute-0 sshd-session[31204]: Invalid user user from 85.94.49.204 port 56892
Nov 25 23:18:20 compute-0 sshd-session[31204]: Connection closed by invalid user user 85.94.49.204 port 56892 [preauth]
Nov 25 23:18:20 compute-0 sshd-session[31206]: Invalid user user from 85.94.49.204 port 56906
Nov 25 23:18:20 compute-0 sshd-session[31206]: Connection closed by invalid user user 85.94.49.204 port 56906 [preauth]
Nov 25 23:18:21 compute-0 sshd-session[31208]: Invalid user user from 85.94.49.204 port 56912
Nov 25 23:18:21 compute-0 sshd-session[31208]: Connection closed by invalid user user 85.94.49.204 port 56912 [preauth]
Nov 25 23:18:22 compute-0 sshd-session[31210]: Invalid user user from 85.94.49.204 port 56916
Nov 25 23:18:22 compute-0 sshd-session[31210]: Connection closed by invalid user user 85.94.49.204 port 56916 [preauth]
Nov 25 23:18:22 compute-0 sshd-session[31212]: Invalid user user from 85.94.49.204 port 56922
Nov 25 23:18:23 compute-0 sshd-session[31212]: Connection closed by invalid user user 85.94.49.204 port 56922 [preauth]
Nov 25 23:18:23 compute-0 sshd-session[31214]: Invalid user user from 85.94.49.204 port 56928
Nov 25 23:18:23 compute-0 sshd-session[31214]: Connection closed by invalid user user 85.94.49.204 port 56928 [preauth]
Nov 25 23:18:24 compute-0 sshd-session[31216]: Invalid user ubuntu from 85.94.49.204 port 56932
Nov 25 23:18:24 compute-0 sshd-session[31216]: Connection closed by invalid user ubuntu 85.94.49.204 port 56932 [preauth]
Nov 25 23:18:25 compute-0 sshd-session[31218]: Invalid user ubuntu from 85.94.49.204 port 56936
Nov 25 23:18:25 compute-0 sshd-session[31218]: Connection closed by invalid user ubuntu 85.94.49.204 port 56936 [preauth]
Nov 25 23:18:26 compute-0 sshd-session[31222]: Invalid user ubuntu from 85.94.49.204 port 56946
Nov 25 23:18:26 compute-0 sshd-session[31222]: Connection closed by invalid user ubuntu 85.94.49.204 port 56946 [preauth]
Nov 25 23:18:26 compute-0 sshd-session[31224]: Invalid user ubuntu from 85.94.49.204 port 56948
Nov 25 23:18:26 compute-0 sshd-session[31220]: Received disconnect from 68.183.82.237 port 57280:11: Bye Bye [preauth]
Nov 25 23:18:26 compute-0 sshd-session[31220]: Disconnected from authenticating user root 68.183.82.237 port 57280 [preauth]
Nov 25 23:18:26 compute-0 sshd-session[31224]: Connection closed by invalid user ubuntu 85.94.49.204 port 56948 [preauth]
Nov 25 23:18:27 compute-0 sshd-session[31226]: Invalid user ubuntu from 85.94.49.204 port 56960
Nov 25 23:18:27 compute-0 sshd-session[31226]: Connection closed by invalid user ubuntu 85.94.49.204 port 56960 [preauth]
Nov 25 23:18:28 compute-0 sshd-session[31228]: Invalid user ubuntu from 85.94.49.204 port 56968
Nov 25 23:18:28 compute-0 sshd-session[31228]: Connection closed by invalid user ubuntu 85.94.49.204 port 56968 [preauth]
Nov 25 23:18:29 compute-0 sshd-session[31230]: Invalid user ubuntu from 85.94.49.204 port 56976
Nov 25 23:18:29 compute-0 sshd-session[31230]: Connection closed by invalid user ubuntu 85.94.49.204 port 56976 [preauth]
Nov 25 23:18:29 compute-0 sshd-session[31232]: Invalid user ubuntu from 85.94.49.204 port 41260
Nov 25 23:18:30 compute-0 sshd-session[31232]: Connection closed by invalid user ubuntu 85.94.49.204 port 41260 [preauth]
Nov 25 23:18:30 compute-0 sshd-session[31234]: Invalid user ubuntu from 85.94.49.204 port 41274
Nov 25 23:18:30 compute-0 sshd-session[31234]: Connection closed by invalid user ubuntu 85.94.49.204 port 41274 [preauth]
Nov 25 23:18:31 compute-0 sshd-session[31236]: Invalid user ubuntu from 85.94.49.204 port 41282
Nov 25 23:18:31 compute-0 sshd-session[31236]: Connection closed by invalid user ubuntu 85.94.49.204 port 41282 [preauth]
Nov 25 23:18:32 compute-0 sshd-session[31238]: Invalid user ubuntu from 85.94.49.204 port 41288
Nov 25 23:18:32 compute-0 sshd-session[31238]: Connection closed by invalid user ubuntu 85.94.49.204 port 41288 [preauth]
Nov 25 23:18:32 compute-0 sshd-session[31240]: Invalid user ubuntu from 85.94.49.204 port 41302
Nov 25 23:18:33 compute-0 sshd-session[31240]: Connection closed by invalid user ubuntu 85.94.49.204 port 41302 [preauth]
Nov 25 23:18:33 compute-0 sshd-session[31242]: Invalid user ubuntu from 85.94.49.204 port 41312
Nov 25 23:18:33 compute-0 sshd-session[31242]: Connection closed by invalid user ubuntu 85.94.49.204 port 41312 [preauth]
Nov 25 23:18:34 compute-0 sshd-session[31246]: Invalid user ubuntu from 85.94.49.204 port 41314
Nov 25 23:18:34 compute-0 sshd-session[31246]: Connection closed by invalid user ubuntu 85.94.49.204 port 41314 [preauth]
Nov 25 23:18:35 compute-0 sshd-session[31244]: Received disconnect from 171.244.140.22 port 58242:11: Bye Bye [preauth]
Nov 25 23:18:35 compute-0 sshd-session[31244]: Disconnected from authenticating user root 171.244.140.22 port 58242 [preauth]
Nov 25 23:18:35 compute-0 sshd-session[31248]: Invalid user ubuntu from 85.94.49.204 port 41320
Nov 25 23:18:35 compute-0 sshd-session[31248]: Connection closed by invalid user ubuntu 85.94.49.204 port 41320 [preauth]
Nov 25 23:18:36 compute-0 sshd-session[31250]: Invalid user ubuntu from 85.94.49.204 port 41334
Nov 25 23:18:36 compute-0 sshd-session[31250]: Connection closed by invalid user ubuntu 85.94.49.204 port 41334 [preauth]
Nov 25 23:18:36 compute-0 sshd-session[31252]: Invalid user ubuntu from 85.94.49.204 port 41350
Nov 25 23:18:36 compute-0 sshd-session[31252]: Connection closed by invalid user ubuntu 85.94.49.204 port 41350 [preauth]
Nov 25 23:18:37 compute-0 sshd-session[31254]: Invalid user ubuntu from 85.94.49.204 port 41352
Nov 25 23:18:37 compute-0 sshd-session[31254]: Connection closed by invalid user ubuntu 85.94.49.204 port 41352 [preauth]
Nov 25 23:18:38 compute-0 sshd-session[31256]: Invalid user ubuntu from 85.94.49.204 port 41364
Nov 25 23:18:38 compute-0 sshd-session[31256]: Connection closed by invalid user ubuntu 85.94.49.204 port 41364 [preauth]
Nov 25 23:18:39 compute-0 sshd-session[31258]: Invalid user ubuntu from 85.94.49.204 port 41366
Nov 25 23:18:39 compute-0 sshd-session[31258]: Connection closed by invalid user ubuntu 85.94.49.204 port 41366 [preauth]
Nov 25 23:18:39 compute-0 sshd-session[31260]: Invalid user ubuntu from 85.94.49.204 port 37518
Nov 25 23:18:40 compute-0 sshd-session[31260]: Connection closed by invalid user ubuntu 85.94.49.204 port 37518 [preauth]
Nov 25 23:18:40 compute-0 sshd-session[31262]: Invalid user ubuntu from 85.94.49.204 port 37530
Nov 25 23:18:40 compute-0 sshd-session[31262]: Connection closed by invalid user ubuntu 85.94.49.204 port 37530 [preauth]
Nov 25 23:18:41 compute-0 sshd-session[31265]: Invalid user ubuntu from 85.94.49.204 port 37532
Nov 25 23:18:41 compute-0 sshd-session[31265]: Connection closed by invalid user ubuntu 85.94.49.204 port 37532 [preauth]
Nov 25 23:18:43 compute-0 sshd-session[31267]: Invalid user ubuntu from 85.94.49.204 port 37540
Nov 25 23:18:43 compute-0 sshd-session[31267]: Connection closed by invalid user ubuntu 85.94.49.204 port 37540 [preauth]
Nov 25 23:18:43 compute-0 sshd-session[31269]: Invalid user ubuntu from 85.94.49.204 port 37542
Nov 25 23:18:44 compute-0 sshd-session[31269]: Connection closed by invalid user ubuntu 85.94.49.204 port 37542 [preauth]
Nov 25 23:18:44 compute-0 sshd-session[31271]: Invalid user ubuntu from 85.94.49.204 port 37552
Nov 25 23:18:44 compute-0 sshd-session[31271]: Connection closed by invalid user ubuntu 85.94.49.204 port 37552 [preauth]
Nov 25 23:18:45 compute-0 sshd-session[31273]: Invalid user ubuntu from 85.94.49.204 port 37568
Nov 25 23:18:45 compute-0 sshd-session[31273]: Connection closed by invalid user ubuntu 85.94.49.204 port 37568 [preauth]
Nov 25 23:18:46 compute-0 sshd-session[31275]: Invalid user ubuntu from 85.94.49.204 port 37574
Nov 25 23:18:46 compute-0 sshd-session[31275]: Connection closed by invalid user ubuntu 85.94.49.204 port 37574 [preauth]
Nov 25 23:18:46 compute-0 sshd-session[31277]: Invalid user ubuntu from 85.94.49.204 port 37586
Nov 25 23:18:47 compute-0 sshd-session[31277]: Connection closed by invalid user ubuntu 85.94.49.204 port 37586 [preauth]
Nov 25 23:18:47 compute-0 sshd-session[31279]: Invalid user ubuntu from 85.94.49.204 port 37596
Nov 25 23:18:47 compute-0 sshd-session[31279]: Connection closed by invalid user ubuntu 85.94.49.204 port 37596 [preauth]
Nov 25 23:18:48 compute-0 sshd-session[31281]: Invalid user ubuntu from 85.94.49.204 port 37610
Nov 25 23:18:48 compute-0 sshd-session[31281]: Connection closed by invalid user ubuntu 85.94.49.204 port 37610 [preauth]
Nov 25 23:18:49 compute-0 sshd-session[31283]: Invalid user ubuntu from 85.94.49.204 port 37622
Nov 25 23:18:49 compute-0 sshd-session[31283]: Connection closed by invalid user ubuntu 85.94.49.204 port 37622 [preauth]
Nov 25 23:18:50 compute-0 sshd-session[31285]: Invalid user ubuntu from 85.94.49.204 port 56108
Nov 25 23:18:50 compute-0 sshd-session[31285]: Connection closed by invalid user ubuntu 85.94.49.204 port 56108 [preauth]
Nov 25 23:18:50 compute-0 sshd-session[31287]: Invalid user ubuntu from 85.94.49.204 port 56112
Nov 25 23:18:50 compute-0 sshd-session[31287]: Connection closed by invalid user ubuntu 85.94.49.204 port 56112 [preauth]
Nov 25 23:18:51 compute-0 sshd-session[31289]: Invalid user ubuntu from 85.94.49.204 port 56120
Nov 25 23:18:51 compute-0 sshd-session[31289]: Connection closed by invalid user ubuntu 85.94.49.204 port 56120 [preauth]
Nov 25 23:18:52 compute-0 sshd-session[31291]: Invalid user ubuntu from 85.94.49.204 port 56134
Nov 25 23:18:52 compute-0 sshd-session[31291]: Connection closed by invalid user ubuntu 85.94.49.204 port 56134 [preauth]
Nov 25 23:18:53 compute-0 sshd-session[31293]: Invalid user ubuntu from 85.94.49.204 port 56140
Nov 25 23:18:53 compute-0 sshd-session[31293]: Connection closed by invalid user ubuntu 85.94.49.204 port 56140 [preauth]
Nov 25 23:18:53 compute-0 sshd-session[31295]: Invalid user ubuntu from 85.94.49.204 port 56146
Nov 25 23:18:53 compute-0 sshd-session[31295]: Connection closed by invalid user ubuntu 85.94.49.204 port 56146 [preauth]
Nov 25 23:18:54 compute-0 sshd-session[31297]: Invalid user ubuntu from 85.94.49.204 port 56158
Nov 25 23:18:54 compute-0 sshd-session[31297]: Connection closed by invalid user ubuntu 85.94.49.204 port 56158 [preauth]
Nov 25 23:18:55 compute-0 sshd-session[31299]: Invalid user ubuntu from 85.94.49.204 port 56162
Nov 25 23:18:55 compute-0 sshd-session[31299]: Connection closed by invalid user ubuntu 85.94.49.204 port 56162 [preauth]
Nov 25 23:18:56 compute-0 sshd-session[31301]: Invalid user ubuntu from 85.94.49.204 port 56170
Nov 25 23:18:56 compute-0 sshd-session[31301]: Connection closed by invalid user ubuntu 85.94.49.204 port 56170 [preauth]
Nov 25 23:18:56 compute-0 sshd-session[31303]: Invalid user ubuntu from 85.94.49.204 port 56184
Nov 25 23:18:57 compute-0 sshd-session[31303]: Connection closed by invalid user ubuntu 85.94.49.204 port 56184 [preauth]
Nov 25 23:18:57 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:56196 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:57 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:56202 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:57 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:56212 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:57 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:56216 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:58 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:56224 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:58 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:56238 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:58 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:56240 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:58 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:56244 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:59 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60118 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:59 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60122 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:59 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60126 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:18:59 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60128 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:00 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60142 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:00 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60152 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:00 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60166 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:00 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60172 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:01 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60180 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:01 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60192 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:01 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60204 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:01 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60212 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:02 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60228 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:02 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60234 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:02 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60236 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:02 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60248 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:03 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60250 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:03 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60258 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:03 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60262 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:03 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60272 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:04 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60274 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:04 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60286 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:04 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60290 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:04 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60292 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:05 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60300 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:05 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60302 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:05 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60304 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:05 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60320 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:06 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60330 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:06 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60344 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:06 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60350 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:06 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60352 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:07 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60358 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:07 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60370 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:07 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60376 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:07 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60380 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:08 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60384 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:08 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60394 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:08 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60406 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:08 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:60412 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:09 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44196 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:09 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44204 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:09 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44210 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:09 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44220 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:10 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44224 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:10 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44234 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:10 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44238 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:10 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44250 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:11 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44260 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:11 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44268 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:11 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44272 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:11 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44286 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:12 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44300 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:12 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44306 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:12 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44320 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:12 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44328 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:13 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:44330 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:19:13 compute-0 sshd-session[31305]: Invalid user debian from 85.94.49.204 port 44344
Nov 25 23:19:13 compute-0 sshd-session[31305]: Connection closed by invalid user debian 85.94.49.204 port 44344 [preauth]
Nov 25 23:19:14 compute-0 sshd-session[31307]: Invalid user debian from 85.94.49.204 port 44360
Nov 25 23:19:14 compute-0 sshd-session[31307]: Connection closed by invalid user debian 85.94.49.204 port 44360 [preauth]
Nov 25 23:19:15 compute-0 sshd-session[31309]: Invalid user debian from 85.94.49.204 port 44372
Nov 25 23:19:15 compute-0 sshd-session[31309]: Connection closed by invalid user debian 85.94.49.204 port 44372 [preauth]
Nov 25 23:19:16 compute-0 sshd-session[31311]: Invalid user debian from 85.94.49.204 port 44378
Nov 25 23:19:16 compute-0 sshd-session[31311]: Connection closed by invalid user debian 85.94.49.204 port 44378 [preauth]
Nov 25 23:19:16 compute-0 sshd-session[31313]: Invalid user debian from 85.94.49.204 port 44392
Nov 25 23:19:17 compute-0 sshd-session[31313]: Connection closed by invalid user debian 85.94.49.204 port 44392 [preauth]
Nov 25 23:19:17 compute-0 sshd-session[31315]: Invalid user debian from 85.94.49.204 port 44408
Nov 25 23:19:17 compute-0 sshd-session[31315]: Connection closed by invalid user debian 85.94.49.204 port 44408 [preauth]
Nov 25 23:19:19 compute-0 sshd-session[31317]: Invalid user debian from 85.94.49.204 port 44410
Nov 25 23:19:19 compute-0 sshd-session[31317]: Connection closed by invalid user debian 85.94.49.204 port 44410 [preauth]
Nov 25 23:19:19 compute-0 sshd-session[31319]: Invalid user debian from 85.94.49.204 port 34710
Nov 25 23:19:20 compute-0 sshd-session[31319]: Connection closed by invalid user debian 85.94.49.204 port 34710 [preauth]
Nov 25 23:19:20 compute-0 sshd-session[31321]: Invalid user debian from 85.94.49.204 port 34718
Nov 25 23:19:20 compute-0 sshd-session[31321]: Connection closed by invalid user debian 85.94.49.204 port 34718 [preauth]
Nov 25 23:19:21 compute-0 sshd-session[31323]: Invalid user debian from 85.94.49.204 port 34720
Nov 25 23:19:21 compute-0 sshd-session[31323]: Connection closed by invalid user debian 85.94.49.204 port 34720 [preauth]
Nov 25 23:19:22 compute-0 sshd-session[31325]: Invalid user debian from 85.94.49.204 port 34734
Nov 25 23:19:22 compute-0 sshd-session[31325]: Connection closed by invalid user debian 85.94.49.204 port 34734 [preauth]
Nov 25 23:19:22 compute-0 sshd-session[31327]: Invalid user debian from 85.94.49.204 port 34750
Nov 25 23:19:23 compute-0 sshd-session[31327]: Connection closed by invalid user debian 85.94.49.204 port 34750 [preauth]
Nov 25 23:19:23 compute-0 sshd-session[31329]: Invalid user debian from 85.94.49.204 port 34752
Nov 25 23:19:23 compute-0 sshd-session[31329]: Connection closed by invalid user debian 85.94.49.204 port 34752 [preauth]
Nov 25 23:19:24 compute-0 sshd-session[31331]: Invalid user debian from 85.94.49.204 port 34760
Nov 25 23:19:24 compute-0 sshd-session[31331]: Connection closed by invalid user debian 85.94.49.204 port 34760 [preauth]
Nov 25 23:19:25 compute-0 sshd-session[31333]: Invalid user debian from 85.94.49.204 port 34768
Nov 25 23:19:25 compute-0 sshd-session[31333]: Connection closed by invalid user debian 85.94.49.204 port 34768 [preauth]
Nov 25 23:19:28 compute-0 sshd-session[31335]: Invalid user debian from 85.94.49.204 port 34770
Nov 25 23:19:28 compute-0 sshd-session[31335]: Connection closed by invalid user debian 85.94.49.204 port 34770 [preauth]
Nov 25 23:19:28 compute-0 sshd-session[31337]: Invalid user debian from 85.94.49.204 port 34778
Nov 25 23:19:28 compute-0 sshd-session[31337]: Connection closed by invalid user debian 85.94.49.204 port 34778 [preauth]
Nov 25 23:19:29 compute-0 sshd-session[31339]: Invalid user debian from 85.94.49.204 port 37220
Nov 25 23:19:29 compute-0 sshd-session[31339]: Connection closed by invalid user debian 85.94.49.204 port 37220 [preauth]
Nov 25 23:19:30 compute-0 sshd-session[31341]: Invalid user debian from 85.94.49.204 port 37226
Nov 25 23:19:30 compute-0 sshd-session[31341]: Connection closed by invalid user debian 85.94.49.204 port 37226 [preauth]
Nov 25 23:19:31 compute-0 sshd-session[31343]: Invalid user debian from 85.94.49.204 port 37242
Nov 25 23:19:31 compute-0 sshd-session[31343]: Connection closed by invalid user debian 85.94.49.204 port 37242 [preauth]
Nov 25 23:19:31 compute-0 sshd-session[31349]: Invalid user debian from 85.94.49.204 port 37244
Nov 25 23:19:32 compute-0 sshd-session[31349]: Connection closed by invalid user debian 85.94.49.204 port 37244 [preauth]
Nov 25 23:19:32 compute-0 sshd-session[31347]: Invalid user sysadmin from 202.83.162.167 port 44716
Nov 25 23:19:32 compute-0 sshd-session[31347]: Received disconnect from 202.83.162.167 port 44716:11: Bye Bye [preauth]
Nov 25 23:19:32 compute-0 sshd-session[31347]: Disconnected from invalid user sysadmin 202.83.162.167 port 44716 [preauth]
Nov 25 23:19:32 compute-0 sshd-session[31351]: Invalid user debian from 85.94.49.204 port 37256
Nov 25 23:19:32 compute-0 sshd-session[31351]: Connection closed by invalid user debian 85.94.49.204 port 37256 [preauth]
Nov 25 23:19:33 compute-0 sshd-session[31345]: Received disconnect from 45.78.216.132 port 52864:11: Bye Bye [preauth]
Nov 25 23:19:33 compute-0 sshd-session[31345]: Disconnected from authenticating user root 45.78.216.132 port 52864 [preauth]
Nov 25 23:19:33 compute-0 sshd-session[31353]: Invalid user toor from 124.18.141.70 port 52798
Nov 25 23:19:33 compute-0 sshd-session[31355]: Invalid user debian from 85.94.49.204 port 37258
Nov 25 23:19:33 compute-0 sshd-session[31353]: Received disconnect from 124.18.141.70 port 52798:11: Bye Bye [preauth]
Nov 25 23:19:33 compute-0 sshd-session[31353]: Disconnected from invalid user toor 124.18.141.70 port 52798 [preauth]
Nov 25 23:19:33 compute-0 sshd-session[31355]: Connection closed by invalid user debian 85.94.49.204 port 37258 [preauth]
Nov 25 23:19:34 compute-0 sshd-session[31357]: Invalid user debian from 85.94.49.204 port 37266
Nov 25 23:19:34 compute-0 sshd-session[31357]: Connection closed by invalid user debian 85.94.49.204 port 37266 [preauth]
Nov 25 23:19:34 compute-0 sshd-session[31359]: Invalid user debian from 85.94.49.204 port 37272
Nov 25 23:19:35 compute-0 sshd-session[31359]: Connection closed by invalid user debian 85.94.49.204 port 37272 [preauth]
Nov 25 23:19:35 compute-0 sshd-session[31361]: Invalid user debian from 85.94.49.204 port 37276
Nov 25 23:19:35 compute-0 sshd-session[31361]: Connection closed by invalid user debian 85.94.49.204 port 37276 [preauth]
Nov 25 23:19:36 compute-0 sshd-session[31363]: Invalid user debian from 85.94.49.204 port 37284
Nov 25 23:19:36 compute-0 sshd-session[31363]: Connection closed by invalid user debian 85.94.49.204 port 37284 [preauth]
Nov 25 23:19:37 compute-0 sshd-session[31365]: Invalid user debian from 85.94.49.204 port 37294
Nov 25 23:19:37 compute-0 sshd-session[31365]: Connection closed by invalid user debian 85.94.49.204 port 37294 [preauth]
Nov 25 23:19:38 compute-0 sshd-session[31367]: Invalid user debian from 85.94.49.204 port 37310
Nov 25 23:19:38 compute-0 sshd-session[31367]: Connection closed by invalid user debian 85.94.49.204 port 37310 [preauth]
Nov 25 23:19:38 compute-0 sshd-session[31369]: Invalid user debian from 85.94.49.204 port 37326
Nov 25 23:19:38 compute-0 sshd-session[31369]: Connection closed by invalid user debian 85.94.49.204 port 37326 [preauth]
Nov 25 23:19:39 compute-0 sshd-session[31371]: Invalid user debian from 85.94.49.204 port 49892
Nov 25 23:19:39 compute-0 sshd-session[31371]: Connection closed by invalid user debian 85.94.49.204 port 49892 [preauth]
Nov 25 23:19:40 compute-0 sshd-session[31375]: Invalid user debian from 85.94.49.204 port 49898
Nov 25 23:19:40 compute-0 sshd-session[31375]: Connection closed by invalid user debian 85.94.49.204 port 49898 [preauth]
Nov 25 23:19:40 compute-0 sshd-session[31373]: Received disconnect from 68.183.82.237 port 36464:11: Bye Bye [preauth]
Nov 25 23:19:40 compute-0 sshd-session[31373]: Disconnected from authenticating user root 68.183.82.237 port 36464 [preauth]
Nov 25 23:19:41 compute-0 sshd-session[31377]: Invalid user debian from 85.94.49.204 port 49906
Nov 25 23:19:41 compute-0 sshd-session[31377]: Connection closed by invalid user debian 85.94.49.204 port 49906 [preauth]
Nov 25 23:19:41 compute-0 sshd-session[31379]: Invalid user debian from 85.94.49.204 port 49922
Nov 25 23:19:41 compute-0 sshd-session[31379]: Connection closed by invalid user debian 85.94.49.204 port 49922 [preauth]
Nov 25 23:19:42 compute-0 sshd-session[31381]: Invalid user debian from 85.94.49.204 port 49938
Nov 25 23:19:42 compute-0 sshd-session[31381]: Connection closed by invalid user debian 85.94.49.204 port 49938 [preauth]
Nov 25 23:19:43 compute-0 sshd-session[31383]: Invalid user debian from 85.94.49.204 port 49952
Nov 25 23:19:43 compute-0 sshd-session[31383]: Connection closed by invalid user debian 85.94.49.204 port 49952 [preauth]
Nov 25 23:19:44 compute-0 sshd-session[31385]: Invalid user debian from 85.94.49.204 port 49968
Nov 25 23:19:44 compute-0 sshd-session[31385]: Connection closed by invalid user debian 85.94.49.204 port 49968 [preauth]
Nov 25 23:19:44 compute-0 sshd-session[31387]: Invalid user debian from 85.94.49.204 port 49974
Nov 25 23:19:45 compute-0 sshd-session[31387]: Connection closed by invalid user debian 85.94.49.204 port 49974 [preauth]
Nov 25 23:19:45 compute-0 sshd-session[31389]: Invalid user debian from 85.94.49.204 port 49988
Nov 25 23:19:45 compute-0 sshd-session[31389]: Connection closed by invalid user debian 85.94.49.204 port 49988 [preauth]
Nov 25 23:19:46 compute-0 sshd-session[31391]: Invalid user debian from 85.94.49.204 port 49990
Nov 25 23:19:46 compute-0 sshd-session[31391]: Connection closed by invalid user debian 85.94.49.204 port 49990 [preauth]
Nov 25 23:19:47 compute-0 sshd-session[31393]: Invalid user debian from 85.94.49.204 port 50004
Nov 25 23:19:47 compute-0 sshd-session[31393]: Connection closed by invalid user debian 85.94.49.204 port 50004 [preauth]
Nov 25 23:19:47 compute-0 sshd-session[31395]: Invalid user debian from 85.94.49.204 port 50014
Nov 25 23:19:48 compute-0 sshd-session[31395]: Connection closed by invalid user debian 85.94.49.204 port 50014 [preauth]
Nov 25 23:19:48 compute-0 sshd-session[31397]: Invalid user debian from 85.94.49.204 port 50020
Nov 25 23:19:48 compute-0 sshd-session[31397]: Connection closed by invalid user debian 85.94.49.204 port 50020 [preauth]
Nov 25 23:19:49 compute-0 sshd-session[31399]: Invalid user debian from 85.94.49.204 port 39238
Nov 25 23:19:49 compute-0 sshd-session[31399]: Connection closed by invalid user debian 85.94.49.204 port 39238 [preauth]
Nov 25 23:19:51 compute-0 sshd-session[31401]: Invalid user debian from 85.94.49.204 port 39240
Nov 25 23:19:51 compute-0 sshd-session[31401]: Connection closed by invalid user debian 85.94.49.204 port 39240 [preauth]
Nov 25 23:19:52 compute-0 sshd-session[31403]: Invalid user debian from 85.94.49.204 port 39242
Nov 25 23:19:52 compute-0 sshd-session[31403]: Connection closed by invalid user debian 85.94.49.204 port 39242 [preauth]
Nov 25 23:19:52 compute-0 sshd-session[31405]: Invalid user debian from 85.94.49.204 port 39252
Nov 25 23:19:52 compute-0 sshd-session[31405]: Connection closed by invalid user debian 85.94.49.204 port 39252 [preauth]
Nov 25 23:19:53 compute-0 sshd-session[31407]: Invalid user debian from 85.94.49.204 port 39262
Nov 25 23:19:53 compute-0 sshd-session[31407]: Connection closed by invalid user debian 85.94.49.204 port 39262 [preauth]
Nov 25 23:19:54 compute-0 sshd-session[31409]: Invalid user debian from 85.94.49.204 port 39276
Nov 25 23:19:54 compute-0 sshd-session[31409]: Connection closed by invalid user debian 85.94.49.204 port 39276 [preauth]
Nov 25 23:19:55 compute-0 sshd-session[31411]: Invalid user debian from 85.94.49.204 port 39278
Nov 25 23:19:55 compute-0 sshd-session[31411]: Connection closed by invalid user debian 85.94.49.204 port 39278 [preauth]
Nov 25 23:19:55 compute-0 sshd-session[31413]: Invalid user debian from 85.94.49.204 port 39280
Nov 25 23:19:55 compute-0 sshd-session[31413]: Connection closed by invalid user debian 85.94.49.204 port 39280 [preauth]
Nov 25 23:19:56 compute-0 sshd-session[31415]: Invalid user debian from 85.94.49.204 port 39286
Nov 25 23:19:56 compute-0 sshd-session[31415]: Connection closed by invalid user debian 85.94.49.204 port 39286 [preauth]
Nov 25 23:19:57 compute-0 sshd-session[31417]: Invalid user debian from 85.94.49.204 port 39298
Nov 25 23:19:57 compute-0 sshd-session[31417]: Connection closed by invalid user debian 85.94.49.204 port 39298 [preauth]
Nov 25 23:19:58 compute-0 sshd-session[31419]: Invalid user debian from 85.94.49.204 port 39302
Nov 25 23:19:58 compute-0 sshd-session[31419]: Connection closed by invalid user debian 85.94.49.204 port 39302 [preauth]
Nov 25 23:19:58 compute-0 sshd-session[31421]: Invalid user debian from 85.94.49.204 port 39308
Nov 25 23:19:59 compute-0 sshd-session[31421]: Connection closed by invalid user debian 85.94.49.204 port 39308 [preauth]
Nov 25 23:19:59 compute-0 sshd-session[31423]: Invalid user debian from 85.94.49.204 port 44862
Nov 25 23:19:59 compute-0 sshd-session[31423]: Connection closed by invalid user debian 85.94.49.204 port 44862 [preauth]
Nov 25 23:20:00 compute-0 sshd-session[31425]: Invalid user debian from 85.94.49.204 port 44872
Nov 25 23:20:00 compute-0 sshd-session[31425]: Connection closed by invalid user debian 85.94.49.204 port 44872 [preauth]
Nov 25 23:20:01 compute-0 sshd-session[31427]: Invalid user admin from 85.94.49.204 port 44876
Nov 25 23:20:01 compute-0 sshd-session[31427]: Connection closed by invalid user admin 85.94.49.204 port 44876 [preauth]
Nov 25 23:20:02 compute-0 sshd-session[31429]: Invalid user admin from 85.94.49.204 port 44882
Nov 25 23:20:02 compute-0 sshd-session[31429]: Connection closed by invalid user admin 85.94.49.204 port 44882 [preauth]
Nov 25 23:20:02 compute-0 sshd-session[31431]: Invalid user admin from 85.94.49.204 port 44890
Nov 25 23:20:02 compute-0 sshd-session[31431]: Connection closed by invalid user admin 85.94.49.204 port 44890 [preauth]
Nov 25 23:20:03 compute-0 sshd-session[31433]: Invalid user admin from 85.94.49.204 port 44898
Nov 25 23:20:03 compute-0 sshd-session[31433]: Connection closed by invalid user admin 85.94.49.204 port 44898 [preauth]
Nov 25 23:20:04 compute-0 sshd-session[31435]: Invalid user admin from 85.94.49.204 port 44906
Nov 25 23:20:04 compute-0 sshd-session[31435]: Connection closed by invalid user admin 85.94.49.204 port 44906 [preauth]
Nov 25 23:20:05 compute-0 sshd-session[31437]: Invalid user admin from 85.94.49.204 port 44908
Nov 25 23:20:05 compute-0 sshd-session[31437]: Connection closed by invalid user admin 85.94.49.204 port 44908 [preauth]
Nov 25 23:20:05 compute-0 sshd-session[31439]: Invalid user admin from 85.94.49.204 port 44918
Nov 25 23:20:05 compute-0 sshd-session[31439]: Connection closed by invalid user admin 85.94.49.204 port 44918 [preauth]
Nov 25 23:20:06 compute-0 sshd-session[31441]: Invalid user admin from 85.94.49.204 port 44922
Nov 25 23:20:06 compute-0 sshd-session[31441]: Connection closed by invalid user admin 85.94.49.204 port 44922 [preauth]
Nov 25 23:20:07 compute-0 sshd-session[31443]: Invalid user admin from 85.94.49.204 port 44924
Nov 25 23:20:07 compute-0 sshd-session[31443]: Connection closed by invalid user admin 85.94.49.204 port 44924 [preauth]
Nov 25 23:20:08 compute-0 sshd-session[31445]: Invalid user admin from 85.94.49.204 port 44936
Nov 25 23:20:08 compute-0 sshd-session[31445]: Connection closed by invalid user admin 85.94.49.204 port 44936 [preauth]
Nov 25 23:20:09 compute-0 sshd-session[31447]: Invalid user admin from 85.94.49.204 port 44944
Nov 25 23:20:09 compute-0 sshd-session[31447]: Connection closed by invalid user admin 85.94.49.204 port 44944 [preauth]
Nov 25 23:20:09 compute-0 sshd-session[31449]: Invalid user admin from 85.94.49.204 port 46384
Nov 25 23:20:10 compute-0 sshd-session[31449]: Connection closed by invalid user admin 85.94.49.204 port 46384 [preauth]
Nov 25 23:20:10 compute-0 sshd-session[31451]: Invalid user admin from 85.94.49.204 port 46398
Nov 25 23:20:10 compute-0 sshd-session[31451]: Connection closed by invalid user admin 85.94.49.204 port 46398 [preauth]
Nov 25 23:20:11 compute-0 sshd-session[31453]: Invalid user admin from 85.94.49.204 port 46408
Nov 25 23:20:11 compute-0 sshd-session[31453]: Connection closed by invalid user admin 85.94.49.204 port 46408 [preauth]
Nov 25 23:20:12 compute-0 sshd-session[31455]: Invalid user admin from 85.94.49.204 port 46414
Nov 25 23:20:12 compute-0 sshd-session[31455]: Connection closed by invalid user admin 85.94.49.204 port 46414 [preauth]
Nov 25 23:20:12 compute-0 sshd-session[31457]: Invalid user admin from 85.94.49.204 port 46422
Nov 25 23:20:13 compute-0 sshd-session[31457]: Connection closed by invalid user admin 85.94.49.204 port 46422 [preauth]
Nov 25 23:20:13 compute-0 sshd-session[31459]: Invalid user admin from 85.94.49.204 port 46436
Nov 25 23:20:13 compute-0 sshd-session[31459]: Connection closed by invalid user admin 85.94.49.204 port 46436 [preauth]
Nov 25 23:20:14 compute-0 sshd-session[31461]: Invalid user admin from 85.94.49.204 port 46440
Nov 25 23:20:14 compute-0 sshd-session[31461]: Connection closed by invalid user admin 85.94.49.204 port 46440 [preauth]
Nov 25 23:20:15 compute-0 sshd-session[31463]: Invalid user admin from 85.94.49.204 port 46442
Nov 25 23:20:15 compute-0 sshd-session[31463]: Connection closed by invalid user admin 85.94.49.204 port 46442 [preauth]
Nov 25 23:20:16 compute-0 sshd-session[31465]: Invalid user admin from 85.94.49.204 port 46450
Nov 25 23:20:16 compute-0 sshd-session[31465]: Connection closed by invalid user admin 85.94.49.204 port 46450 [preauth]
Nov 25 23:20:16 compute-0 sshd-session[31467]: Invalid user admin from 85.94.49.204 port 46462
Nov 25 23:20:16 compute-0 sshd-session[31467]: Connection closed by invalid user admin 85.94.49.204 port 46462 [preauth]
Nov 25 23:20:17 compute-0 sshd-session[31470]: Invalid user admin from 85.94.49.204 port 46474
Nov 25 23:20:17 compute-0 sshd-session[31470]: Connection closed by invalid user admin 85.94.49.204 port 46474 [preauth]
Nov 25 23:20:18 compute-0 sshd-session[31472]: Invalid user admin from 85.94.49.204 port 46482
Nov 25 23:20:18 compute-0 sshd-session[31472]: Connection closed by invalid user admin 85.94.49.204 port 46482 [preauth]
Nov 25 23:20:19 compute-0 sshd-session[31474]: Invalid user admin from 85.94.49.204 port 46486
Nov 25 23:20:19 compute-0 sshd-session[31474]: Connection closed by invalid user admin 85.94.49.204 port 46486 [preauth]
Nov 25 23:20:19 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54832 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:19 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54840 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:19 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54844 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:20 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54846 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:20 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54858 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:20 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54860 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:20 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54874 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:21 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54878 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:21 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54882 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:21 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54898 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:21 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54900 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:22 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54902 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:22 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54914 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:22 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54924 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:22 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54930 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:23 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54942 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:23 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54958 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:23 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54964 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:23 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54978 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:24 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54980 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:24 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:54996 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:24 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55004 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:24 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55008 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:25 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55012 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:25 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55020 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:25 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55026 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:25 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55040 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:26 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55048 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:26 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55056 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:26 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55072 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:26 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55078 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:27 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55086 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:27 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55100 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:27 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55112 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:27 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55114 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:28 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55122 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:28 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55136 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:28 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55142 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:28 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:55158 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:29 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58778 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:29 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58782 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:29 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58796 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:29 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58806 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:30 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58818 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:30 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58822 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:30 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58830 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:30 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58834 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:31 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58846 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:31 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58852 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:31 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58868 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:31 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58880 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:32 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58892 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:32 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58906 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:32 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58912 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:32 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58920 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:32 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58928 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:33 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58938 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:33 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58946 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:33 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58952 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:33 compute-0 sshd[1002]: drop connection #0 from [85.94.49.204]:58954 on [38.102.83.77]:22 penalty: connections without attempting authentication
Nov 25 23:20:55 compute-0 sshd-session[31477]: Received disconnect from 68.183.82.237 port 46048:11: Bye Bye [preauth]
Nov 25 23:20:55 compute-0 sshd-session[31477]: Disconnected from authenticating user root 68.183.82.237 port 46048 [preauth]
Nov 25 23:21:05 compute-0 sshd-session[31479]: Received disconnect from 202.83.162.167 port 60322:11: Bye Bye [preauth]
Nov 25 23:21:05 compute-0 sshd-session[31479]: Disconnected from authenticating user root 202.83.162.167 port 60322 [preauth]
Nov 25 23:21:12 compute-0 sshd-session[31481]: Invalid user magento from 124.18.141.70 port 60692
Nov 25 23:21:12 compute-0 sshd-session[31481]: Received disconnect from 124.18.141.70 port 60692:11: Bye Bye [preauth]
Nov 25 23:21:12 compute-0 sshd-session[31481]: Disconnected from invalid user magento 124.18.141.70 port 60692 [preauth]
Nov 25 23:22:02 compute-0 sshd-session[31484]: Connection closed by 62.87.151.183 port 27743
Nov 25 23:22:10 compute-0 sshd-session[31487]: Invalid user afa from 68.183.82.237 port 58262
Nov 25 23:22:10 compute-0 sshd-session[31487]: Received disconnect from 68.183.82.237 port 58262:11: Bye Bye [preauth]
Nov 25 23:22:10 compute-0 sshd-session[31487]: Disconnected from invalid user afa 68.183.82.237 port 58262 [preauth]
Nov 25 23:22:18 compute-0 sshd-session[31489]: Accepted publickey for zuul from 192.168.122.30 port 43864 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:22:18 compute-0 systemd-logind[789]: New session 8 of user zuul.
Nov 25 23:22:19 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 25 23:22:19 compute-0 sshd-session[31489]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:22:19 compute-0 sshd-session[31485]: Invalid user user2 from 45.78.216.132 port 46102
Nov 25 23:22:19 compute-0 sshd-session[31485]: Received disconnect from 45.78.216.132 port 46102:11: Bye Bye [preauth]
Nov 25 23:22:19 compute-0 sshd-session[31485]: Disconnected from invalid user user2 45.78.216.132 port 46102 [preauth]
Nov 25 23:22:20 compute-0 python3.9[31642]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:22:21 compute-0 sudo[31821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-locngsqvnjbmzpxjiwjfjddavejhdnbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112940.844596-32-40143666589464/AnsiballZ_command.py'
Nov 25 23:22:21 compute-0 sudo[31821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:22:21 compute-0 python3.9[31823]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:22:28 compute-0 sudo[31821]: pam_unix(sudo:session): session closed for user root
Nov 25 23:22:29 compute-0 sshd-session[31492]: Connection closed by 192.168.122.30 port 43864
Nov 25 23:22:29 compute-0 sshd-session[31489]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:22:29 compute-0 systemd-logind[789]: Session 8 logged out. Waiting for processes to exit.
Nov 25 23:22:29 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 25 23:22:29 compute-0 systemd[1]: session-8.scope: Consumed 8.102s CPU time.
Nov 25 23:22:29 compute-0 systemd-logind[789]: Removed session 8.
Nov 25 23:22:30 compute-0 sshd-session[31880]: Received disconnect from 171.244.140.22 port 44460:11: Bye Bye [preauth]
Nov 25 23:22:30 compute-0 sshd-session[31880]: Disconnected from authenticating user root 171.244.140.22 port 44460 [preauth]
Nov 25 23:22:36 compute-0 sshd-session[31882]: Invalid user dev from 202.83.162.167 port 58538
Nov 25 23:22:36 compute-0 sshd-session[31882]: Received disconnect from 202.83.162.167 port 58538:11: Bye Bye [preauth]
Nov 25 23:22:36 compute-0 sshd-session[31882]: Disconnected from invalid user dev 202.83.162.167 port 58538 [preauth]
Nov 25 23:22:44 compute-0 sshd-session[31884]: Accepted publickey for zuul from 192.168.122.30 port 39788 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:22:44 compute-0 systemd-logind[789]: New session 9 of user zuul.
Nov 25 23:22:44 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 25 23:22:44 compute-0 sshd-session[31884]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:22:45 compute-0 python3.9[32037]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 23:22:46 compute-0 python3.9[32211]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:22:47 compute-0 sudo[32361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpsubnagwhxhictsgqnygjwukfozubyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112967.1004612-45-253802174004647/AnsiballZ_command.py'
Nov 25 23:22:47 compute-0 sudo[32361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:22:47 compute-0 python3.9[32363]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:22:47 compute-0 sudo[32361]: pam_unix(sudo:session): session closed for user root
Nov 25 23:22:48 compute-0 sudo[32514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkiyhyztjvfkimgbznmtrhhnqjebvlky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112968.2056615-57-50082916956687/AnsiballZ_stat.py'
Nov 25 23:22:48 compute-0 sudo[32514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:22:48 compute-0 python3.9[32516]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:22:49 compute-0 sudo[32514]: pam_unix(sudo:session): session closed for user root
Nov 25 23:22:49 compute-0 sudo[32666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsmkvwqkwkznjuelflwiwdleifktqidm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112969.2621796-65-258645593479103/AnsiballZ_file.py'
Nov 25 23:22:49 compute-0 sudo[32666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:22:50 compute-0 python3.9[32668]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:22:50 compute-0 sudo[32666]: pam_unix(sudo:session): session closed for user root
Nov 25 23:22:50 compute-0 sudo[32820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exgksewvatjhloenygojvjlqvvkbfiaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112970.3555105-73-86464027415954/AnsiballZ_stat.py'
Nov 25 23:22:50 compute-0 sudo[32820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:22:50 compute-0 python3.9[32822]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:22:50 compute-0 sudo[32820]: pam_unix(sudo:session): session closed for user root
Nov 25 23:22:51 compute-0 sshd-session[32693]: Invalid user vyos from 124.18.141.70 port 50106
Nov 25 23:22:51 compute-0 sshd-session[32693]: Received disconnect from 124.18.141.70 port 50106:11: Bye Bye [preauth]
Nov 25 23:22:51 compute-0 sshd-session[32693]: Disconnected from invalid user vyos 124.18.141.70 port 50106 [preauth]
Nov 25 23:22:51 compute-0 sudo[32943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csmoyasvtuslxumkkyxzukyomrspsezw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112970.3555105-73-86464027415954/AnsiballZ_copy.py'
Nov 25 23:22:51 compute-0 sudo[32943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:22:51 compute-0 python3.9[32945]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764112970.3555105-73-86464027415954/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:22:51 compute-0 sudo[32943]: pam_unix(sudo:session): session closed for user root
Nov 25 23:22:52 compute-0 sudo[33095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apezmncsfiodugyrkesbkfqnabxdkwuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112972.0032318-88-58422347706177/AnsiballZ_setup.py'
Nov 25 23:22:52 compute-0 sudo[33095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:22:52 compute-0 python3.9[33097]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:22:52 compute-0 sudo[33095]: pam_unix(sudo:session): session closed for user root
Nov 25 23:22:53 compute-0 sudo[33251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-airquzyitzmubiriibhrxeugrqezqgxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112973.0770366-96-192877896805421/AnsiballZ_file.py'
Nov 25 23:22:53 compute-0 sudo[33251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:22:53 compute-0 python3.9[33253]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:22:53 compute-0 sudo[33251]: pam_unix(sudo:session): session closed for user root
Nov 25 23:22:54 compute-0 sudo[33403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxgjsxzmzszfwsdldnxhhxnxyfzufqlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112973.9894001-105-215853241730144/AnsiballZ_file.py'
Nov 25 23:22:54 compute-0 sudo[33403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:22:54 compute-0 python3.9[33405]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:22:54 compute-0 sudo[33403]: pam_unix(sudo:session): session closed for user root
Nov 25 23:22:55 compute-0 python3.9[33555]: ansible-ansible.builtin.service_facts Invoked
Nov 25 23:23:01 compute-0 python3.9[33808]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:23:02 compute-0 python3.9[33958]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:23:03 compute-0 python3.9[34112]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:23:04 compute-0 sudo[34268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phcxpppnxqlqudiykvtbtpgrqddqafdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112984.1790652-153-120036498688990/AnsiballZ_setup.py'
Nov 25 23:23:04 compute-0 sudo[34268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:23:04 compute-0 python3.9[34270]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:23:05 compute-0 sudo[34268]: pam_unix(sudo:session): session closed for user root
Nov 25 23:23:05 compute-0 sudo[34352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccdtlvokxlmeucunstcraoignywqvkcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764112984.1790652-153-120036498688990/AnsiballZ_dnf.py'
Nov 25 23:23:05 compute-0 sudo[34352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:23:05 compute-0 python3.9[34354]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:23:23 compute-0 sshd-session[34466]: Invalid user user from 68.183.82.237 port 38640
Nov 25 23:23:23 compute-0 sshd-session[34466]: Received disconnect from 68.183.82.237 port 38640:11: Bye Bye [preauth]
Nov 25 23:23:23 compute-0 sshd-session[34466]: Disconnected from invalid user user 68.183.82.237 port 38640 [preauth]
Nov 25 23:24:02 compute-0 systemd[1]: Reloading.
Nov 25 23:24:02 compute-0 systemd-rc-local-generator[34549]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:24:03 compute-0 systemd[1]: Starting dnf makecache...
Nov 25 23:24:03 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 25 23:24:03 compute-0 dnf[34558]: Failed determining last makecache time.
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-openstack-barbican-42b4c41831408a8e323 160 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 193 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-openstack-cinder-1c00d6490d88e436f26ef 195 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-python-stevedore-c4acc5639fd2329372142 193 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-python-observabilityclient-2f31846d73c 195 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-os-net-config-bbae2ed8a159b0435a473f38 190 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 201 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 systemd[1]: Reloading.
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-python-designate-tests-tempest-347fdbc 191 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-openstack-glance-1fd12c29b339f30fe823e 190 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 200 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-openstack-manila-3c01b7181572c95dac462 208 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-python-whitebox-neutron-tests-tempest- 191 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 systemd-rc-local-generator[34601]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-openstack-octavia-ba397f07a7331190208c 196 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-openstack-watcher-c014f81a8647287f6dcc 181 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-python-tcib-1124124ec06aadbac34f0d340b 199 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 155 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-openstack-swift-dc98a8463506ac520c469a 186 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-python-tempestconf-8515371b7cceebd4282 165 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 dnf[34558]: delorean-openstack-heat-ui-013accbfd179753bc3f0 190 kB/s | 3.0 kB     00:00
Nov 25 23:24:03 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 25 23:24:03 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 25 23:24:03 compute-0 dnf[34558]: CentOS Stream 9 - BaseOS                         44 kB/s | 6.7 kB     00:00
Nov 25 23:24:03 compute-0 systemd[1]: Reloading.
Nov 25 23:24:03 compute-0 sshd-session[34517]: Received disconnect from 202.83.162.167 port 43138:11: Bye Bye [preauth]
Nov 25 23:24:03 compute-0 sshd-session[34517]: Disconnected from authenticating user root 202.83.162.167 port 43138 [preauth]
Nov 25 23:24:03 compute-0 systemd-rc-local-generator[34650]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:24:04 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 25 23:24:04 compute-0 dnf[34558]: CentOS Stream 9 - AppStream                      25 kB/s | 6.8 kB     00:00
Nov 25 23:24:04 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 23:24:04 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 23:24:04 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 23:24:04 compute-0 dnf[34558]: CentOS Stream 9 - CRB                            60 kB/s | 6.5 kB     00:00
Nov 25 23:24:04 compute-0 dnf[34558]: CentOS Stream 9 - Extras packages                71 kB/s | 8.3 kB     00:00
Nov 25 23:24:04 compute-0 dnf[34558]: dlrn-antelope-testing                           108 kB/s | 3.0 kB     00:00
Nov 25 23:24:04 compute-0 dnf[34558]: dlrn-antelope-build-deps                        113 kB/s | 3.0 kB     00:00
Nov 25 23:24:06 compute-0 dnf[34558]: centos9-rabbitmq                                1.9 kB/s | 3.0 kB     00:01
Nov 25 23:24:06 compute-0 dnf[34558]: centos9-storage                                  81 kB/s | 3.0 kB     00:00
Nov 25 23:24:06 compute-0 dnf[34558]: centos9-opstools                                 75 kB/s | 3.0 kB     00:00
Nov 25 23:24:06 compute-0 dnf[34558]: NFV SIG OpenvSwitch                              77 kB/s | 3.0 kB     00:00
Nov 25 23:24:06 compute-0 dnf[34558]: repo-setup-centos-appstream                     126 kB/s | 4.4 kB     00:00
Nov 25 23:24:06 compute-0 dnf[34558]: repo-setup-centos-baseos                        138 kB/s | 3.9 kB     00:00
Nov 25 23:24:06 compute-0 dnf[34558]: repo-setup-centos-highavailability               12 kB/s | 3.9 kB     00:00
Nov 25 23:24:06 compute-0 dnf[34558]: repo-setup-centos-powertools                    141 kB/s | 4.3 kB     00:00
Nov 25 23:24:07 compute-0 dnf[34558]: Extra Packages for Enterprise Linux 9 - x86_64   91 kB/s |  35 kB     00:00
Nov 25 23:24:07 compute-0 dnf[34558]: Metadata cache created.
Nov 25 23:24:07 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 23:24:07 compute-0 systemd[1]: Finished dnf makecache.
Nov 25 23:24:07 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.885s CPU time.
Nov 25 23:24:27 compute-0 sshd-session[34762]: Invalid user tibero from 124.18.141.70 port 57700
Nov 25 23:24:27 compute-0 sshd-session[34762]: Received disconnect from 124.18.141.70 port 57700:11: Bye Bye [preauth]
Nov 25 23:24:27 compute-0 sshd-session[34762]: Disconnected from invalid user tibero 124.18.141.70 port 57700 [preauth]
Nov 25 23:24:28 compute-0 sshd-session[34769]: Invalid user ntps from 171.244.140.22 port 53686
Nov 25 23:24:28 compute-0 sshd-session[34769]: Received disconnect from 171.244.140.22 port 53686:11: Bye Bye [preauth]
Nov 25 23:24:28 compute-0 sshd-session[34769]: Disconnected from invalid user ntps 171.244.140.22 port 53686 [preauth]
Nov 25 23:24:40 compute-0 sshd-session[34816]: Received disconnect from 68.183.82.237 port 46150:11: Bye Bye [preauth]
Nov 25 23:24:40 compute-0 sshd-session[34816]: Disconnected from authenticating user root 68.183.82.237 port 46150 [preauth]
Nov 25 23:25:06 compute-0 kernel: SELinux:  Converting 2718 SID table entries...
Nov 25 23:25:06 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 23:25:06 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 23:25:06 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 23:25:06 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 23:25:06 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 23:25:06 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 23:25:06 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 23:25:06 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 25 23:25:06 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 23:25:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 23:25:06 compute-0 systemd[1]: Reloading.
Nov 25 23:25:06 compute-0 systemd-rc-local-generator[34993]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:25:07 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 23:25:07 compute-0 sudo[34352]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:07 compute-0 sudo[35907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvfmftbmwoatltzazsvnamaqajnbixgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113107.6496363-165-185341321539997/AnsiballZ_command.py'
Nov 25 23:25:07 compute-0 sudo[35907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 23:25:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 23:25:08 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.345s CPU time.
Nov 25 23:25:08 compute-0 systemd[1]: run-r426503ff246b49428b1c9976a5ca9905.service: Deactivated successfully.
Nov 25 23:25:08 compute-0 python3.9[35909]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:25:09 compute-0 sudo[35907]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:10 compute-0 sudo[36190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfgmcwnmllfhixrzpawgxqbgtpnarlva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113109.501915-173-25019622652153/AnsiballZ_selinux.py'
Nov 25 23:25:10 compute-0 sudo[36190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:10 compute-0 python3.9[36192]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 23:25:10 compute-0 sudo[36190]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:11 compute-0 sudo[36342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxdmshxlpxjznztktcqrhzcwzludtdsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113110.9239826-184-64929875693439/AnsiballZ_command.py'
Nov 25 23:25:11 compute-0 sudo[36342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:11 compute-0 python3.9[36344]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 23:25:12 compute-0 sudo[36342]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:13 compute-0 sudo[36495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmczytotpyyqbcuptguzturhmiqkhjvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113112.6765954-192-101161881667624/AnsiballZ_file.py'
Nov 25 23:25:13 compute-0 sudo[36495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:13 compute-0 python3.9[36497]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:25:13 compute-0 sudo[36495]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:14 compute-0 sudo[36647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxiisdeswgimhmxigxagrycrojyvafrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113113.7189696-200-65497396216659/AnsiballZ_mount.py'
Nov 25 23:25:14 compute-0 sudo[36647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:14 compute-0 python3.9[36649]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 23:25:14 compute-0 sudo[36647]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:15 compute-0 sudo[36799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgqllxzsjyjnrosunxnzudauhatsroqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113115.4540899-228-231873377182658/AnsiballZ_file.py'
Nov 25 23:25:15 compute-0 sudo[36799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:15 compute-0 python3.9[36801]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:25:16 compute-0 sudo[36799]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:16 compute-0 sudo[36951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vamvkwxznpnbwhhbolpiagevkakvdosm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113116.212891-236-260308907406611/AnsiballZ_stat.py'
Nov 25 23:25:16 compute-0 sudo[36951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:16 compute-0 python3.9[36953]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:25:16 compute-0 sudo[36951]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:17 compute-0 sudo[37074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deznhqaltabqgnwsnhtwwrmvupthpblu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113116.212891-236-260308907406611/AnsiballZ_copy.py'
Nov 25 23:25:17 compute-0 sudo[37074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:17 compute-0 python3.9[37076]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113116.212891-236-260308907406611/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:25:17 compute-0 sudo[37074]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:18 compute-0 sudo[37226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiztqravteacetoynshghvfvojiikdal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113118.090925-260-101414028913617/AnsiballZ_stat.py'
Nov 25 23:25:18 compute-0 sudo[37226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:18 compute-0 python3.9[37228]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:25:18 compute-0 sudo[37226]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:19 compute-0 sudo[37378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmnfubxjcrjpyctzzzsiahjtpvmgxyfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113118.8710597-268-202175785280373/AnsiballZ_command.py'
Nov 25 23:25:19 compute-0 sudo[37378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:19 compute-0 python3.9[37380]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:25:19 compute-0 sudo[37378]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:20 compute-0 sudo[37531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdofslupbjsbycumxtsgwgrzvwawizvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113119.846069-276-46586055057698/AnsiballZ_file.py'
Nov 25 23:25:20 compute-0 sudo[37531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:20 compute-0 python3.9[37533]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:25:20 compute-0 sudo[37531]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:21 compute-0 sudo[37683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwtgbsqrbnivkjccpluokbymoizdovju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113120.9214554-287-33027930486077/AnsiballZ_getent.py'
Nov 25 23:25:21 compute-0 sudo[37683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:21 compute-0 python3.9[37685]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 23:25:21 compute-0 sudo[37683]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:21 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 23:25:22 compute-0 sudo[37837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qotyayxdineaneqbuswrekfavzyvcdvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113121.939068-295-221355961759696/AnsiballZ_group.py'
Nov 25 23:25:22 compute-0 sudo[37837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:23 compute-0 irqbalance[783]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 25 23:25:23 compute-0 irqbalance[783]: IRQ 26 affinity is now unmanaged
Nov 25 23:25:25 compute-0 python3.9[37839]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 23:25:25 compute-0 groupadd[37840]: group added to /etc/group: name=qemu, GID=107
Nov 25 23:25:25 compute-0 groupadd[37840]: group added to /etc/gshadow: name=qemu
Nov 25 23:25:25 compute-0 groupadd[37840]: new group: name=qemu, GID=107
Nov 25 23:25:25 compute-0 sudo[37837]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:26 compute-0 sudo[37995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkdqtjvcihqbqmdcochgdbgwgbtqaufn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113125.4704926-303-247942580405518/AnsiballZ_user.py'
Nov 25 23:25:26 compute-0 sudo[37995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:26 compute-0 python3.9[37997]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 23:25:26 compute-0 useradd[37999]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 23:25:26 compute-0 sudo[37995]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:26 compute-0 sudo[38155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydmfcrchyfhfowbwcdrrutyfydqpljhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113126.5955708-311-108869522823423/AnsiballZ_getent.py'
Nov 25 23:25:26 compute-0 sudo[38155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:27 compute-0 python3.9[38157]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 23:25:27 compute-0 sudo[38155]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:27 compute-0 sudo[38310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pureduforsdqycxqdamfhmpsaqispijw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113127.4464421-319-236865434851478/AnsiballZ_group.py'
Nov 25 23:25:27 compute-0 sudo[38310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:28 compute-0 python3.9[38312]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 23:25:28 compute-0 groupadd[38313]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 25 23:25:28 compute-0 groupadd[38313]: group added to /etc/gshadow: name=hugetlbfs
Nov 25 23:25:28 compute-0 groupadd[38313]: new group: name=hugetlbfs, GID=42477
Nov 25 23:25:28 compute-0 sudo[38310]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:28 compute-0 sshd-session[38183]: Invalid user runner from 202.83.162.167 port 58034
Nov 25 23:25:28 compute-0 sshd-session[38183]: Received disconnect from 202.83.162.167 port 58034:11: Bye Bye [preauth]
Nov 25 23:25:28 compute-0 sshd-session[38183]: Disconnected from invalid user runner 202.83.162.167 port 58034 [preauth]
Nov 25 23:25:28 compute-0 sudo[38468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adpnoexfyrhwbjsihwaobjfcgjdijyjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113128.388317-328-277974058156381/AnsiballZ_file.py'
Nov 25 23:25:28 compute-0 sudo[38468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:29 compute-0 python3.9[38470]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 23:25:29 compute-0 sudo[38468]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:29 compute-0 sudo[38620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbdarkkgmzcsxzdigvbuowluvpxltvgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113129.4579701-339-101722402448946/AnsiballZ_dnf.py'
Nov 25 23:25:29 compute-0 sudo[38620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:30 compute-0 python3.9[38622]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:25:31 compute-0 sudo[38620]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:32 compute-0 sudo[38773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xootskpytcyrwiewmjhztknukynxxgjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113131.8227656-347-280980658804338/AnsiballZ_file.py'
Nov 25 23:25:32 compute-0 sudo[38773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:32 compute-0 python3.9[38775]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:25:32 compute-0 sudo[38773]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:32 compute-0 sudo[38925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgllsnjypjladruwxqnvnlqurqtfmodp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113132.6256735-355-130533914106850/AnsiballZ_stat.py'
Nov 25 23:25:33 compute-0 sudo[38925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:33 compute-0 python3.9[38927]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:25:33 compute-0 sudo[38925]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:33 compute-0 sudo[39048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-husrkoboxjxmfjcfoontnwjepzjthtjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113132.6256735-355-130533914106850/AnsiballZ_copy.py'
Nov 25 23:25:33 compute-0 sudo[39048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:33 compute-0 python3.9[39050]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764113132.6256735-355-130533914106850/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:25:33 compute-0 sudo[39048]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:34 compute-0 sudo[39200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbvbjanabjocaoerfkpqnmzhdcjzoquo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113134.0207448-370-193131826795807/AnsiballZ_systemd.py'
Nov 25 23:25:34 compute-0 sudo[39200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:34 compute-0 python3.9[39202]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:25:35 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 23:25:35 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 25 23:25:35 compute-0 kernel: Bridge firewalling registered
Nov 25 23:25:35 compute-0 systemd-modules-load[39206]: Inserted module 'br_netfilter'
Nov 25 23:25:35 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 23:25:35 compute-0 sudo[39200]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:35 compute-0 sudo[39360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-femoyeosxueioqepboevgxnioiaijzoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113135.3603024-378-224572589975245/AnsiballZ_stat.py'
Nov 25 23:25:35 compute-0 sudo[39360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:35 compute-0 python3.9[39362]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:25:35 compute-0 sudo[39360]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:36 compute-0 sudo[39483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfpqgoglcvoqinvhkltkiefxhebmxown ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113135.3603024-378-224572589975245/AnsiballZ_copy.py'
Nov 25 23:25:36 compute-0 sudo[39483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:36 compute-0 python3.9[39485]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764113135.3603024-378-224572589975245/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:25:36 compute-0 sudo[39483]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:37 compute-0 sudo[39635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laqxmjbrakehjgpndmkbxlnovteyugsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113136.993094-396-5387721725579/AnsiballZ_dnf.py'
Nov 25 23:25:37 compute-0 sudo[39635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:37 compute-0 python3.9[39637]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:25:41 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 23:25:41 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 23:25:41 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 23:25:41 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 23:25:41 compute-0 systemd[1]: Reloading.
Nov 25 23:25:41 compute-0 systemd-rc-local-generator[39702]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:25:41 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 23:25:42 compute-0 sudo[39635]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:43 compute-0 python3.9[40947]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:25:44 compute-0 python3.9[41879]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 23:25:44 compute-0 python3.9[42696]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:25:45 compute-0 sudo[43531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdzxorjfevhqskojhbtbbhrwwayqlsjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113145.2486873-435-139374357169494/AnsiballZ_command.py'
Nov 25 23:25:45 compute-0 sudo[43531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:45 compute-0 python3.9[43553]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:25:45 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 23:25:45 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 23:25:45 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 23:25:45 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.555s CPU time.
Nov 25 23:25:45 compute-0 systemd[1]: run-re9ed9568aac34088a97998bc402a2d45.service: Deactivated successfully.
Nov 25 23:25:46 compute-0 systemd[1]: Starting Authorization Manager...
Nov 25 23:25:46 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 23:25:46 compute-0 polkitd[44014]: Started polkitd version 0.117
Nov 25 23:25:46 compute-0 polkitd[44014]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 23:25:46 compute-0 polkitd[44014]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 23:25:46 compute-0 polkitd[44014]: Finished loading, compiling and executing 2 rules
Nov 25 23:25:46 compute-0 systemd[1]: Started Authorization Manager.
Nov 25 23:25:46 compute-0 polkitd[44014]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 25 23:25:46 compute-0 sudo[43531]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:46 compute-0 sudo[44182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyfwdfvnjrodtbxsguouxupyllnuysej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113146.5770218-444-276540431288174/AnsiballZ_systemd.py'
Nov 25 23:25:46 compute-0 sudo[44182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:47 compute-0 python3.9[44184]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:25:47 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 23:25:47 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 23:25:47 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 23:25:47 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 23:25:47 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 23:25:47 compute-0 sudo[44182]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:48 compute-0 python3.9[44346]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 23:25:50 compute-0 sudo[44496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqnnxaqbgiklmndmqpcsetybkqsxdxrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113150.4326153-501-13926019377583/AnsiballZ_systemd.py'
Nov 25 23:25:50 compute-0 sudo[44496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:51 compute-0 python3.9[44498]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:25:51 compute-0 systemd[1]: Reloading.
Nov 25 23:25:51 compute-0 systemd-rc-local-generator[44530]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:25:51 compute-0 sudo[44496]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:52 compute-0 sudo[44688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnbzbqlvdnfqytjvtbkqazeurflpxyxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113151.694522-501-178236586247296/AnsiballZ_systemd.py'
Nov 25 23:25:52 compute-0 sudo[44688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:52 compute-0 sshd-session[44499]: Invalid user tech from 68.183.82.237 port 54302
Nov 25 23:25:52 compute-0 python3.9[44690]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:25:52 compute-0 systemd[1]: Reloading.
Nov 25 23:25:52 compute-0 sshd-session[44499]: Received disconnect from 68.183.82.237 port 54302:11: Bye Bye [preauth]
Nov 25 23:25:52 compute-0 sshd-session[44499]: Disconnected from invalid user tech 68.183.82.237 port 54302 [preauth]
Nov 25 23:25:52 compute-0 systemd-rc-local-generator[44719]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:25:52 compute-0 sudo[44688]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:53 compute-0 sudo[44876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyrofteaoqbcfhyeozbcjpguampdzyqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113153.0577977-517-136583519926526/AnsiballZ_command.py'
Nov 25 23:25:53 compute-0 sudo[44876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:53 compute-0 python3.9[44878]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:25:53 compute-0 sudo[44876]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:54 compute-0 sudo[45029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcuivtfupbdbjkqertuqafiwqicyfxli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113153.7687795-525-127737757832093/AnsiballZ_command.py'
Nov 25 23:25:54 compute-0 sudo[45029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:54 compute-0 python3.9[45031]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:25:54 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 25 23:25:54 compute-0 sudo[45029]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:54 compute-0 sudo[45182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezbdvrkgpmlpjotnsbtzruajryazopat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113154.5888286-533-241916583472627/AnsiballZ_command.py'
Nov 25 23:25:54 compute-0 sudo[45182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:55 compute-0 python3.9[45184]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:25:56 compute-0 sudo[45182]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:57 compute-0 sudo[45344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqaodvnwwlwipeaxieqobdonwpqjfgww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113157.0040436-541-210593268588387/AnsiballZ_command.py'
Nov 25 23:25:57 compute-0 sudo[45344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:57 compute-0 python3.9[45346]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:25:57 compute-0 sudo[45344]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:58 compute-0 sudo[45497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsypckaiyzyogovoskbfhwozikaopliw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113157.676245-549-9039725720144/AnsiballZ_systemd.py'
Nov 25 23:25:58 compute-0 sudo[45497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:25:58 compute-0 python3.9[45499]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:25:58 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 23:25:58 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 23:25:58 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 25 23:25:58 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 25 23:25:58 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 23:25:58 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 25 23:25:58 compute-0 sudo[45497]: pam_unix(sudo:session): session closed for user root
Nov 25 23:25:58 compute-0 sshd-session[31887]: Connection closed by 192.168.122.30 port 39788
Nov 25 23:25:58 compute-0 sshd-session[31884]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:25:58 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 25 23:25:58 compute-0 systemd[1]: session-9.scope: Consumed 2min 19.879s CPU time.
Nov 25 23:25:58 compute-0 systemd-logind[789]: Session 9 logged out. Waiting for processes to exit.
Nov 25 23:25:58 compute-0 systemd-logind[789]: Removed session 9.
Nov 25 23:26:04 compute-0 sshd-session[45530]: Accepted publickey for zuul from 192.168.122.30 port 44698 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:26:04 compute-0 systemd-logind[789]: New session 10 of user zuul.
Nov 25 23:26:04 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 25 23:26:04 compute-0 sshd-session[45530]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:26:05 compute-0 python3.9[45685]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:26:05 compute-0 sshd-session[45586]: Invalid user bitnami from 124.18.141.70 port 52304
Nov 25 23:26:06 compute-0 sshd-session[45586]: Received disconnect from 124.18.141.70 port 52304:11: Bye Bye [preauth]
Nov 25 23:26:06 compute-0 sshd-session[45586]: Disconnected from invalid user bitnami 124.18.141.70 port 52304 [preauth]
Nov 25 23:26:06 compute-0 sudo[45839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrklvkpzfxwastlfmqifjfrhmicstzgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113166.3289177-36-277958394733285/AnsiballZ_getent.py'
Nov 25 23:26:06 compute-0 sudo[45839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:07 compute-0 python3.9[45841]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 23:26:07 compute-0 sudo[45839]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:07 compute-0 sudo[45992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omnprstvwcizplzwvrrjlesskulvaegf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113167.2453344-44-196973810415380/AnsiballZ_group.py'
Nov 25 23:26:07 compute-0 sudo[45992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:07 compute-0 python3.9[45994]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 23:26:07 compute-0 groupadd[45995]: group added to /etc/group: name=openvswitch, GID=42476
Nov 25 23:26:07 compute-0 groupadd[45995]: group added to /etc/gshadow: name=openvswitch
Nov 25 23:26:07 compute-0 groupadd[45995]: new group: name=openvswitch, GID=42476
Nov 25 23:26:07 compute-0 sudo[45992]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:08 compute-0 sudo[46150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbmiygyujuthwvkkvdrpruphvqdmenky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113168.1320329-52-163748870407145/AnsiballZ_user.py'
Nov 25 23:26:08 compute-0 sudo[46150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:08 compute-0 python3.9[46152]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 23:26:08 compute-0 useradd[46154]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 23:26:08 compute-0 useradd[46154]: add 'openvswitch' to group 'hugetlbfs'
Nov 25 23:26:08 compute-0 useradd[46154]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 25 23:26:08 compute-0 sudo[46150]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:09 compute-0 sudo[46310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wltmtnmmljwjrssdabfbelqbghxsrelk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113169.241104-62-234364671967090/AnsiballZ_setup.py'
Nov 25 23:26:09 compute-0 sudo[46310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:09 compute-0 python3.9[46312]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:26:10 compute-0 sudo[46310]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:10 compute-0 sudo[46394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpowwbgehcffkujahhvwyrfkkqcngacn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113169.241104-62-234364671967090/AnsiballZ_dnf.py'
Nov 25 23:26:10 compute-0 sudo[46394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:10 compute-0 python3.9[46396]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 23:26:13 compute-0 sudo[46394]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:14 compute-0 sudo[46559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksedbgggsigdrnxftdecbhxjnkhtgnqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113173.7305298-76-139531841077823/AnsiballZ_dnf.py'
Nov 25 23:26:14 compute-0 sudo[46559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:14 compute-0 python3.9[46561]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:26:24 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Nov 25 23:26:24 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 23:26:24 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 23:26:24 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 23:26:24 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 23:26:24 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 23:26:24 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 23:26:24 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 23:26:25 compute-0 groupadd[46584]: group added to /etc/group: name=unbound, GID=993
Nov 25 23:26:25 compute-0 groupadd[46584]: group added to /etc/gshadow: name=unbound
Nov 25 23:26:25 compute-0 groupadd[46584]: new group: name=unbound, GID=993
Nov 25 23:26:25 compute-0 useradd[46591]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 25 23:26:25 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 25 23:26:25 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 25 23:26:26 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 23:26:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 23:26:26 compute-0 systemd[1]: Reloading.
Nov 25 23:26:26 compute-0 systemd-rc-local-generator[47092]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:26:26 compute-0 systemd-sysv-generator[47096]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:26:26 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 23:26:27 compute-0 sudo[46559]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 23:26:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 23:26:27 compute-0 systemd[1]: run-r3acf9f046dee491789275c046c8a6594.service: Deactivated successfully.
Nov 25 23:26:27 compute-0 sshd-session[46992]: Invalid user ubuntu from 171.244.140.22 port 53628
Nov 25 23:26:27 compute-0 sshd-session[46992]: Received disconnect from 171.244.140.22 port 53628:11: Bye Bye [preauth]
Nov 25 23:26:27 compute-0 sshd-session[46992]: Disconnected from invalid user ubuntu 171.244.140.22 port 53628 [preauth]
Nov 25 23:26:28 compute-0 sudo[47660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnfdcdbonnhjbzgrxinwgfzxdeqlujtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113187.5785382-84-207305316308479/AnsiballZ_systemd.py'
Nov 25 23:26:28 compute-0 sudo[47660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:28 compute-0 python3.9[47662]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 23:26:28 compute-0 systemd[1]: Reloading.
Nov 25 23:26:28 compute-0 systemd-rc-local-generator[47693]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:26:28 compute-0 systemd-sysv-generator[47696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:26:28 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 25 23:26:28 compute-0 chown[47704]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 25 23:26:29 compute-0 ovs-ctl[47709]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 25 23:26:29 compute-0 ovs-ctl[47709]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 25 23:26:29 compute-0 ovs-ctl[47709]: Starting ovsdb-server [  OK  ]
Nov 25 23:26:29 compute-0 ovs-vsctl[47758]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 25 23:26:29 compute-0 ovs-vsctl[47778]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"2ba84045-48af-49e3-86f7-35b32300977f\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 25 23:26:29 compute-0 ovs-ctl[47709]: Configuring Open vSwitch system IDs [  OK  ]
Nov 25 23:26:29 compute-0 ovs-ctl[47709]: Enabling remote OVSDB managers [  OK  ]
Nov 25 23:26:29 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 25 23:26:29 compute-0 ovs-vsctl[47784]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 23:26:29 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 25 23:26:29 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 25 23:26:29 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 25 23:26:29 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 25 23:26:29 compute-0 ovs-ctl[47829]: Inserting openvswitch module [  OK  ]
Nov 25 23:26:29 compute-0 ovs-ctl[47798]: Starting ovs-vswitchd [  OK  ]
Nov 25 23:26:29 compute-0 ovs-ctl[47798]: Enabling remote OVSDB managers [  OK  ]
Nov 25 23:26:29 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 25 23:26:29 compute-0 ovs-vsctl[47851]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 23:26:29 compute-0 systemd[1]: Starting Open vSwitch...
Nov 25 23:26:29 compute-0 systemd[1]: Finished Open vSwitch.
Nov 25 23:26:29 compute-0 sudo[47660]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:30 compute-0 python3.9[48002]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:26:31 compute-0 sudo[48152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiaxefwnbpkegyqrlqxpxuqwejwghfru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113191.1709387-102-65547435044760/AnsiballZ_sefcontext.py'
Nov 25 23:26:31 compute-0 sudo[48152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:32 compute-0 python3.9[48154]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 23:26:33 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Nov 25 23:26:33 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 23:26:33 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 23:26:33 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 23:26:33 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 23:26:33 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 23:26:33 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 23:26:33 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 23:26:33 compute-0 sudo[48152]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:34 compute-0 python3.9[48309]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:26:35 compute-0 sudo[48465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dveaguxnsflnzzkvsphzncjuttnqdhlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113194.87839-120-125904976590962/AnsiballZ_dnf.py'
Nov 25 23:26:35 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 25 23:26:35 compute-0 sudo[48465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:35 compute-0 python3.9[48467]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:26:36 compute-0 sudo[48465]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:37 compute-0 sudo[48618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnunhntgeaektusxbxtpbhiistxcwusj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113196.91284-128-72014903973168/AnsiballZ_command.py'
Nov 25 23:26:37 compute-0 sudo[48618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:37 compute-0 python3.9[48620]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:26:38 compute-0 sudo[48618]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:39 compute-0 sudo[48905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhhuhlbnidzbdtvaxtosvvgpsvpysaph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113198.6406498-136-220851804904804/AnsiballZ_file.py'
Nov 25 23:26:39 compute-0 sudo[48905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:39 compute-0 python3.9[48907]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 23:26:39 compute-0 sudo[48905]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:40 compute-0 python3.9[49057]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:26:40 compute-0 sudo[49209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfmozocvagpmaujnfftxpntpxqngwzbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113200.5575125-152-38577470876654/AnsiballZ_dnf.py'
Nov 25 23:26:40 compute-0 sudo[49209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:41 compute-0 python3.9[49211]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:26:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 23:26:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 23:26:43 compute-0 systemd[1]: Reloading.
Nov 25 23:26:43 compute-0 systemd-rc-local-generator[49246]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:26:43 compute-0 systemd-sysv-generator[49252]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:26:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 23:26:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 23:26:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 23:26:43 compute-0 systemd[1]: run-re9f5fa994f44462194f74842383ab087.service: Deactivated successfully.
Nov 25 23:26:43 compute-0 sudo[49209]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:44 compute-0 sudo[49525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxotihtdyuiiajqabnqgsiohqbjxgrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113203.7493343-160-223018023382503/AnsiballZ_systemd.py'
Nov 25 23:26:44 compute-0 sudo[49525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:44 compute-0 python3.9[49527]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:26:44 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 23:26:44 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 23:26:44 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 23:26:44 compute-0 NetworkManager[7181]: <info>  [1764113204.5166] caught SIGTERM, shutting down normally.
Nov 25 23:26:44 compute-0 systemd[1]: Stopping Network Manager...
Nov 25 23:26:44 compute-0 NetworkManager[7181]: <info>  [1764113204.5181] dhcp4 (eth0): canceled DHCP transaction
Nov 25 23:26:44 compute-0 NetworkManager[7181]: <info>  [1764113204.5181] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 23:26:44 compute-0 NetworkManager[7181]: <info>  [1764113204.5181] dhcp4 (eth0): state changed no lease
Nov 25 23:26:44 compute-0 NetworkManager[7181]: <info>  [1764113204.5184] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 23:26:44 compute-0 NetworkManager[7181]: <info>  [1764113204.5253] exiting (success)
Nov 25 23:26:44 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 23:26:44 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 23:26:44 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 23:26:44 compute-0 systemd[1]: Stopped Network Manager.
Nov 25 23:26:44 compute-0 systemd[1]: NetworkManager.service: Consumed 13.174s CPU time, 4.1M memory peak, read 0B from disk, written 30.5K to disk.
Nov 25 23:26:44 compute-0 systemd[1]: Starting Network Manager...
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.6112] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:3edafa6c-db49-405c-9758-42faad226154)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.6113] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.6186] manager[0x5631585c4090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 23:26:44 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 23:26:44 compute-0 systemd[1]: Started Hostname Service.
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.6955] hostname: hostname: using hostnamed
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.6958] hostname: static hostname changed from (none) to "compute-0"
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.6965] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.6973] manager[0x5631585c4090]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.6973] manager[0x5631585c4090]: rfkill: WWAN hardware radio set enabled
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7014] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7030] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7031] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7032] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7033] manager: Networking is enabled by state file
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7037] settings: Loaded settings plugin: keyfile (internal)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7044] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7100] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7119] dhcp: init: Using DHCP client 'internal'
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7124] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7135] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7148] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7165] device (lo): Activation: starting connection 'lo' (8a2e98f0-f5c9-4e09-92f1-2bf0997fed4f)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7179] device (eth0): carrier: link connected
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7189] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7199] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7200] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7215] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7229] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7242] device (eth1): carrier: link connected
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7250] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7262] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea) (indicated)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7263] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7275] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7292] device (eth1): Activation: starting connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7301] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 23:26:44 compute-0 systemd[1]: Started Network Manager.
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7329] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7347] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7351] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7355] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7359] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7364] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7369] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7374] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7390] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7395] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7409] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7430] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7447] dhcp4 (eth0): state changed new lease, address=38.102.83.77
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7460] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7557] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 sudo[49525]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7846] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7847] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7853] device (lo): Activation: successful, device activated.
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7860] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7862] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7864] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7866] device (eth1): Activation: successful, device activated.
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7878] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7880] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7883] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7885] device (eth0): Activation: successful, device activated.
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7890] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 23:26:44 compute-0 NetworkManager[49538]: <info>  [1764113204.7891] manager: startup complete
Nov 25 23:26:44 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 25 23:26:44 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 25 23:26:45 compute-0 sshd[1002]: Timeout before authentication for connection from 45.78.216.132 to 38.102.83.77, pid = 34835
Nov 25 23:26:45 compute-0 sudo[49751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mekkfxixchowildaxqhgmizvbnedpbrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113204.9847047-168-153967429865418/AnsiballZ_dnf.py'
Nov 25 23:26:45 compute-0 sudo[49751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:45 compute-0 python3.9[49753]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:26:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 23:26:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 23:26:50 compute-0 systemd[1]: Reloading.
Nov 25 23:26:50 compute-0 systemd-sysv-generator[49809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:26:50 compute-0 systemd-rc-local-generator[49806]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:26:50 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 23:26:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 23:26:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 23:26:51 compute-0 systemd[1]: run-rc7d41e3d4ba6474989c0b7b62881b9f7.service: Deactivated successfully.
Nov 25 23:26:51 compute-0 sudo[49751]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:52 compute-0 sudo[50210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqmdvrsiceobhpipmsmjpozterbzercv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113211.7597363-180-202979660458783/AnsiballZ_stat.py'
Nov 25 23:26:52 compute-0 sudo[50210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:52 compute-0 python3.9[50212]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:26:52 compute-0 sudo[50210]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:53 compute-0 sudo[50362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idiwiryfsxqbyscbbeoxufjsocdbgvcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113212.621397-189-50823934205025/AnsiballZ_ini_file.py'
Nov 25 23:26:53 compute-0 sudo[50362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:53 compute-0 python3.9[50364]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:26:53 compute-0 sudo[50362]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:53 compute-0 sudo[50516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syibfjefumawazhsspprbuklzxubdbog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113213.6416073-199-18250371072768/AnsiballZ_ini_file.py'
Nov 25 23:26:53 compute-0 sudo[50516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:54 compute-0 python3.9[50518]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:26:54 compute-0 sudo[50516]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:54 compute-0 sudo[50670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnvnqxovxkoaqqqgpycsugatvtnqvjpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113214.3847072-199-64240773864827/AnsiballZ_ini_file.py'
Nov 25 23:26:54 compute-0 sudo[50670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:54 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 23:26:54 compute-0 python3.9[50672]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:26:54 compute-0 sudo[50670]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:55 compute-0 sudo[50822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbwadmkrfeplujkwevjxisodklcaviwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113215.2069356-214-264002954749545/AnsiballZ_ini_file.py'
Nov 25 23:26:55 compute-0 sudo[50822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:55 compute-0 python3.9[50824]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:26:55 compute-0 sudo[50822]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:55 compute-0 sshd-session[50642]: Invalid user jack from 202.83.162.167 port 40828
Nov 25 23:26:56 compute-0 sshd-session[50642]: Received disconnect from 202.83.162.167 port 40828:11: Bye Bye [preauth]
Nov 25 23:26:56 compute-0 sshd-session[50642]: Disconnected from invalid user jack 202.83.162.167 port 40828 [preauth]
Nov 25 23:26:56 compute-0 sudo[50974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibbizfnxiyhwcctbmawzhbtucluydmie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113215.9469583-214-52391659244861/AnsiballZ_ini_file.py'
Nov 25 23:26:56 compute-0 sudo[50974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:56 compute-0 python3.9[50976]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:26:56 compute-0 sudo[50974]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:57 compute-0 sudo[51126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxbjtyevuwhpibfcwsbdhyrypzghjcrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113216.7272477-229-211187757122222/AnsiballZ_stat.py'
Nov 25 23:26:57 compute-0 sudo[51126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:57 compute-0 python3.9[51128]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:26:57 compute-0 sudo[51126]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:57 compute-0 sudo[51249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guqdnkzldybisjxvyjzmydkagaaiztoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113216.7272477-229-211187757122222/AnsiballZ_copy.py'
Nov 25 23:26:57 compute-0 sudo[51249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:58 compute-0 python3.9[51251]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113216.7272477-229-211187757122222/.source _original_basename=.5htp7qku follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:26:58 compute-0 sudo[51249]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:58 compute-0 sudo[51401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slviicellkdhkcmvoncytpyqcineoqvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113218.2861736-244-239114873490819/AnsiballZ_file.py'
Nov 25 23:26:58 compute-0 sudo[51401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:58 compute-0 python3.9[51403]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:26:58 compute-0 sudo[51401]: pam_unix(sudo:session): session closed for user root
Nov 25 23:26:59 compute-0 sudo[51553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iziynustnhfrondruhduhrtmarwnucxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113219.0679245-252-220812064287889/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 25 23:26:59 compute-0 sudo[51553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:26:59 compute-0 python3.9[51555]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 25 23:26:59 compute-0 sudo[51553]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:00 compute-0 sudo[51705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyqibrlprlhqphgfbnbpnmzuxziaabfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113220.1327558-261-175728930626859/AnsiballZ_file.py'
Nov 25 23:27:00 compute-0 sudo[51705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:00 compute-0 python3.9[51707]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:27:00 compute-0 sudo[51705]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:01 compute-0 sudo[51857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epxkiorabereoupfmqmhkyjhvvttacmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113221.0419664-271-231348379433654/AnsiballZ_stat.py'
Nov 25 23:27:01 compute-0 sudo[51857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:01 compute-0 sudo[51857]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:02 compute-0 sudo[51980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muijydjgrcdhajxsnpljcykgrqzvqzpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113221.0419664-271-231348379433654/AnsiballZ_copy.py'
Nov 25 23:27:02 compute-0 sudo[51980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:02 compute-0 sudo[51980]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:03 compute-0 sudo[52132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zinmddtfajepjthxvezxmmukiuchysgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113222.5600264-286-182414115890745/AnsiballZ_slurp.py'
Nov 25 23:27:03 compute-0 sudo[52132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:03 compute-0 python3.9[52134]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 25 23:27:03 compute-0 sudo[52132]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:04 compute-0 sudo[52307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vijyfdwemwjteceldlzojunryahjjdiz ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113223.5894563-295-33890118623343/async_wrapper.py j269707048957 300 /home/zuul/.ansible/tmp/ansible-tmp-1764113223.5894563-295-33890118623343/AnsiballZ_edpm_os_net_config.py _'
Nov 25 23:27:04 compute-0 sudo[52307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:04 compute-0 ansible-async_wrapper.py[52309]: Invoked with j269707048957 300 /home/zuul/.ansible/tmp/ansible-tmp-1764113223.5894563-295-33890118623343/AnsiballZ_edpm_os_net_config.py _
Nov 25 23:27:04 compute-0 ansible-async_wrapper.py[52312]: Starting module and watcher
Nov 25 23:27:04 compute-0 ansible-async_wrapper.py[52312]: Start watching 52313 (300)
Nov 25 23:27:04 compute-0 ansible-async_wrapper.py[52313]: Start module (52313)
Nov 25 23:27:04 compute-0 ansible-async_wrapper.py[52309]: Return async_wrapper task started.
Nov 25 23:27:04 compute-0 sudo[52307]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:04 compute-0 python3.9[52314]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 25 23:27:05 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 25 23:27:05 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 25 23:27:05 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 25 23:27:05 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 25 23:27:05 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 25 23:27:05 compute-0 sshd-session[52315]: Invalid user msf from 68.183.82.237 port 39562
Nov 25 23:27:06 compute-0 sshd-session[52315]: Received disconnect from 68.183.82.237 port 39562:11: Bye Bye [preauth]
Nov 25 23:27:06 compute-0 sshd-session[52315]: Disconnected from invalid user msf 68.183.82.237 port 39562 [preauth]
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.0918] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.0942] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1810] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1813] audit: op="connection-add" uuid="9c708b3a-69d0-43ee-b403-d7301cd129e2" name="br-ex-br" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1836] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1838] audit: op="connection-add" uuid="50921d83-0472-4957-b860-618d86855943" name="br-ex-port" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1859] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1861] audit: op="connection-add" uuid="a27ac2e2-e8a9-45af-94be-b51b58d97c3f" name="eth1-port" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1881] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1884] audit: op="connection-add" uuid="d33b806a-8180-49b2-85e2-c860ee2e7ae4" name="vlan20-port" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1905] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1908] audit: op="connection-add" uuid="1680c4e9-1e1a-4b7b-8844-e59f320c8c3e" name="vlan21-port" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1928] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1931] audit: op="connection-add" uuid="c1877d06-7c4e-4fcf-9774-985a89a4720e" name="vlan22-port" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1950] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1953] audit: op="connection-add" uuid="4c7ef48f-7cfd-49bd-b160-32bf38bb9924" name="vlan23-port" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.1986] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.timestamp,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2015] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2018] audit: op="connection-add" uuid="a3c3a56d-712c-4672-be0f-5a239aa74094" name="br-ex-if" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2081] audit: op="connection-update" uuid="c033a316-88a7-53bd-b45b-dbc337c105ea" name="ci-private-network" args="ipv4.dns,ipv4.never-default,ipv4.addresses,ipv4.routes,ipv4.routing-rules,ipv4.method,connection.slave-type,connection.controller,connection.port-type,connection.master,connection.timestamp,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.routes,ipv6.routing-rules,ipv6.method,ovs-interface.type,ovs-external-ids.data" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2113] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2116] audit: op="connection-add" uuid="8ae29937-410c-4a40-9ccd-249a4bb9fd1b" name="vlan20-if" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2144] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2147] audit: op="connection-add" uuid="fa6b6eb4-bb5d-4723-8e70-6eedb5b870f6" name="vlan21-if" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2175] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2177] audit: op="connection-add" uuid="95a7d88b-877c-4534-856e-447059dcc871" name="vlan22-if" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2205] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2208] audit: op="connection-add" uuid="814c5f06-5e24-4818-8c00-db9bc8b02641" name="vlan23-if" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2227] audit: op="connection-delete" uuid="395ca42c-36a5-36fb-90b5-378a60416156" name="Wired connection 1" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2246] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2262] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2268] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (9c708b3a-69d0-43ee-b403-d7301cd129e2)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2269] audit: op="connection-activate" uuid="9c708b3a-69d0-43ee-b403-d7301cd129e2" name="br-ex-br" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2272] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2283] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2289] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (50921d83-0472-4957-b860-618d86855943)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2293] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2302] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2309] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a27ac2e2-e8a9-45af-94be-b51b58d97c3f)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2312] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2324] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2331] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (d33b806a-8180-49b2-85e2-c860ee2e7ae4)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2334] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2344] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2351] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (1680c4e9-1e1a-4b7b-8844-e59f320c8c3e)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2354] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2364] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2371] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c1877d06-7c4e-4fcf-9774-985a89a4720e)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2374] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2385] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2390] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4c7ef48f-7cfd-49bd-b160-32bf38bb9924)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2391] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2394] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2396] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2402] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2408] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2412] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a3c3a56d-712c-4672-be0f-5a239aa74094)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2413] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2416] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2418] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2419] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2421] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2434] device (eth1): disconnecting for new activation request.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2435] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2446] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2450] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2453] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2458] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2468] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2476] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (8ae29937-410c-4a40-9ccd-249a4bb9fd1b)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2477] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2482] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2486] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2489] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2494] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2503] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2510] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (fa6b6eb4-bb5d-4723-8e70-6eedb5b870f6)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2512] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2518] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2523] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2525] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2530] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2539] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2548] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (95a7d88b-877c-4534-856e-447059dcc871)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2549] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2553] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2555] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2557] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2560] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2566] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2570] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (814c5f06-5e24-4818-8c00-db9bc8b02641)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2572] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2575] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2578] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2579] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2581] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2597] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2602] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2607] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2609] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2627] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2632] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2639] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2643] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2645] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2654] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2660] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2666] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2670] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2677] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2684] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2689] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 kernel: Timeout policy base is empty
Nov 25 23:27:07 compute-0 systemd-udevd[52321]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2691] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2699] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2706] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2712] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2714] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2720] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2727] dhcp4 (eth0): canceled DHCP transaction
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2727] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2727] dhcp4 (eth0): state changed no lease
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2729] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2742] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 25 23:27:07 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2760] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52317 uid=0 result="fail" reason="Device is not activated"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2768] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2790] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2794] dhcp4 (eth0): state changed new lease, address=38.102.83.77
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2852] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2944] device (eth1): Activation: starting connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2951] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2961] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2974] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2983] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.2990] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3002] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3010] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3018] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3022] device (eth1): released from controller device eth1
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3032] device (eth1): disconnecting for new activation request.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3033] audit: op="connection-activate" uuid="c033a316-88a7-53bd-b45b-dbc337c105ea" name="ci-private-network" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3034] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3036] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3038] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3041] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3043] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3045] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3049] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3056] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3063] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3070] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3075] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3081] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3086] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3093] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3098] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3105] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3110] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3116] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3122] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3133] device (eth1): Activation: starting connection 'ci-private-network' (c033a316-88a7-53bd-b45b-dbc337c105ea)
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3154] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3159] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3165] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52317 uid=0 result="success"
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3167] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 kernel: br-ex: entered promiscuous mode
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3211] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3216] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3259] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3262] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3271] device (eth1): Activation: successful, device activated.
Nov 25 23:27:07 compute-0 kernel: vlan22: entered promiscuous mode
Nov 25 23:27:07 compute-0 systemd-udevd[52322]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3434] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3449] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3516] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3519] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3528] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 kernel: vlan21: entered promiscuous mode
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3589] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 25 23:27:07 compute-0 systemd-udevd[52421]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3618] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3674] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3677] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3687] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 kernel: vlan20: entered promiscuous mode
Nov 25 23:27:07 compute-0 kernel: vlan23: entered promiscuous mode
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3812] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3829] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3900] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3905] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3917] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3936] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.3957] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.4013] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.4015] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.4023] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.4036] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.4064] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.4116] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.4121] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 23:27:07 compute-0 NetworkManager[49538]: <info>  [1764113227.4133] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 23:27:08 compute-0 sudo[52678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwoylogvzxlrgfssgytwemudvslyhrcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113227.665647-295-239420670674906/AnsiballZ_async_status.py'
Nov 25 23:27:08 compute-0 sudo[52678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:08 compute-0 python3.9[52680]: ansible-ansible.legacy.async_status Invoked with jid=j269707048957.52309 mode=status _async_dir=/root/.ansible_async
Nov 25 23:27:08 compute-0 sudo[52678]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:08 compute-0 NetworkManager[49538]: <info>  [1764113228.5580] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52317 uid=0 result="success"
Nov 25 23:27:08 compute-0 NetworkManager[49538]: <info>  [1764113228.8320] checkpoint[0x56315859a950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 25 23:27:08 compute-0 NetworkManager[49538]: <info>  [1764113228.8327] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52317 uid=0 result="success"
Nov 25 23:27:09 compute-0 NetworkManager[49538]: <info>  [1764113229.3659] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52317 uid=0 result="success"
Nov 25 23:27:09 compute-0 NetworkManager[49538]: <info>  [1764113229.3675] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52317 uid=0 result="success"
Nov 25 23:27:09 compute-0 ansible-async_wrapper.py[52312]: 52313 still running (300)
Nov 25 23:27:09 compute-0 NetworkManager[49538]: <info>  [1764113229.6038] audit: op="networking-control" arg="global-dns-configuration" pid=52317 uid=0 result="success"
Nov 25 23:27:09 compute-0 NetworkManager[49538]: <info>  [1764113229.6064] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 25 23:27:09 compute-0 NetworkManager[49538]: <info>  [1764113229.6086] audit: op="networking-control" arg="global-dns-configuration" pid=52317 uid=0 result="success"
Nov 25 23:27:09 compute-0 NetworkManager[49538]: <info>  [1764113229.6538] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52317 uid=0 result="success"
Nov 25 23:27:09 compute-0 NetworkManager[49538]: <info>  [1764113229.8489] checkpoint[0x56315859aa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 25 23:27:09 compute-0 NetworkManager[49538]: <info>  [1764113229.8496] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52317 uid=0 result="success"
Nov 25 23:27:09 compute-0 ansible-async_wrapper.py[52313]: Module complete (52313)
Nov 25 23:27:11 compute-0 sudo[52784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehmkrqhyysvrtafyiyntkktbcltqywwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113227.665647-295-239420670674906/AnsiballZ_async_status.py'
Nov 25 23:27:11 compute-0 sudo[52784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:11 compute-0 python3.9[52786]: ansible-ansible.legacy.async_status Invoked with jid=j269707048957.52309 mode=status _async_dir=/root/.ansible_async
Nov 25 23:27:12 compute-0 sudo[52784]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:12 compute-0 sudo[52884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuuvzshzexceuwehzuabgrbfdeacpqat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113227.665647-295-239420670674906/AnsiballZ_async_status.py'
Nov 25 23:27:12 compute-0 sudo[52884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:12 compute-0 python3.9[52886]: ansible-ansible.legacy.async_status Invoked with jid=j269707048957.52309 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 23:27:12 compute-0 sudo[52884]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:13 compute-0 sudo[53036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hokttmhnnuxeykzxvhzlngjbqpxzcjvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113232.8052025-322-693976135028/AnsiballZ_stat.py'
Nov 25 23:27:13 compute-0 sudo[53036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:13 compute-0 python3.9[53038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:27:13 compute-0 sudo[53036]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:13 compute-0 sudo[53159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aigmlbjgqsjdueigvbcjsxaxacgcigjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113232.8052025-322-693976135028/AnsiballZ_copy.py'
Nov 25 23:27:13 compute-0 sudo[53159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:14 compute-0 python3.9[53161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113232.8052025-322-693976135028/.source.returncode _original_basename=.9i55egcd follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:27:14 compute-0 sudo[53159]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:14 compute-0 ansible-async_wrapper.py[52312]: Done in kid B.
Nov 25 23:27:14 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 23:27:14 compute-0 sudo[53314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtpvcslkfeolzzrpqaythfuaduqrnrnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113234.3899596-338-276113039052793/AnsiballZ_stat.py'
Nov 25 23:27:14 compute-0 sudo[53314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:14 compute-0 python3.9[53316]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:27:14 compute-0 sudo[53314]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:15 compute-0 sudo[53438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnuzrjhpftgtwoitefhmkiwqxqbwfhrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113234.3899596-338-276113039052793/AnsiballZ_copy.py'
Nov 25 23:27:15 compute-0 sudo[53438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:15 compute-0 python3.9[53440]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113234.3899596-338-276113039052793/.source.cfg _original_basename=.9frrghco follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:27:15 compute-0 sudo[53438]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:16 compute-0 sudo[53590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywvoxpxapqfiwzopammcnhyzqsklokwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113235.9080496-353-104199782135772/AnsiballZ_systemd.py'
Nov 25 23:27:16 compute-0 sudo[53590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:16 compute-0 python3.9[53592]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:27:16 compute-0 systemd[1]: Reloading Network Manager...
Nov 25 23:27:16 compute-0 NetworkManager[49538]: <info>  [1764113236.6988] audit: op="reload" arg="0" pid=53596 uid=0 result="success"
Nov 25 23:27:16 compute-0 NetworkManager[49538]: <info>  [1764113236.6999] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 25 23:27:16 compute-0 systemd[1]: Reloaded Network Manager.
Nov 25 23:27:16 compute-0 sudo[53590]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:17 compute-0 sshd-session[45533]: Connection closed by 192.168.122.30 port 44698
Nov 25 23:27:17 compute-0 sshd-session[45530]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:27:17 compute-0 systemd-logind[789]: Session 10 logged out. Waiting for processes to exit.
Nov 25 23:27:17 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 25 23:27:17 compute-0 systemd[1]: session-10.scope: Consumed 53.532s CPU time.
Nov 25 23:27:17 compute-0 systemd-logind[789]: Removed session 10.
Nov 25 23:27:20 compute-0 sshd[1002]: drop connection #0 from [45.78.216.132]:36294 on [38.102.83.77]:22 penalty: exceeded LoginGraceTime
Nov 25 23:27:22 compute-0 sshd-session[53627]: Accepted publickey for zuul from 192.168.122.30 port 54798 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:27:22 compute-0 systemd-logind[789]: New session 11 of user zuul.
Nov 25 23:27:22 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 25 23:27:22 compute-0 sshd-session[53627]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:27:23 compute-0 python3.9[53780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:27:24 compute-0 python3.9[53934]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:27:26 compute-0 python3.9[54128]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:27:26 compute-0 sshd-session[53630]: Connection closed by 192.168.122.30 port 54798
Nov 25 23:27:26 compute-0 sshd-session[53627]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:27:26 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 25 23:27:26 compute-0 systemd[1]: session-11.scope: Consumed 2.816s CPU time.
Nov 25 23:27:26 compute-0 systemd-logind[789]: Session 11 logged out. Waiting for processes to exit.
Nov 25 23:27:26 compute-0 systemd-logind[789]: Removed session 11.
Nov 25 23:27:26 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 23:27:32 compute-0 sshd-session[54157]: Accepted publickey for zuul from 192.168.122.30 port 36136 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:27:32 compute-0 systemd-logind[789]: New session 12 of user zuul.
Nov 25 23:27:32 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 25 23:27:32 compute-0 sshd-session[54157]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:27:33 compute-0 python3.9[54310]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:27:34 compute-0 python3.9[54465]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:27:35 compute-0 sudo[54619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iidnqfwxtkesimeqraobjgsbcajinvfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113254.9832544-40-250994308072360/AnsiballZ_setup.py'
Nov 25 23:27:35 compute-0 sudo[54619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:35 compute-0 python3.9[54621]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:27:35 compute-0 sudo[54619]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:36 compute-0 sudo[54703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmeaxhnponpmyhlxypanqqjkbehydkcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113254.9832544-40-250994308072360/AnsiballZ_dnf.py'
Nov 25 23:27:36 compute-0 sudo[54703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:36 compute-0 python3.9[54705]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:27:37 compute-0 sshd-session[54707]: Invalid user vyos from 124.18.141.70 port 48282
Nov 25 23:27:37 compute-0 sudo[54703]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:38 compute-0 sshd-session[54707]: Received disconnect from 124.18.141.70 port 48282:11: Bye Bye [preauth]
Nov 25 23:27:38 compute-0 sshd-session[54707]: Disconnected from invalid user vyos 124.18.141.70 port 48282 [preauth]
Nov 25 23:27:38 compute-0 sudo[54859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tumrfgzijhxfbizqwzsmxhlltqsabgcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113258.0378714-52-133360112665338/AnsiballZ_setup.py'
Nov 25 23:27:38 compute-0 sudo[54859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:38 compute-0 python3.9[54861]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:27:38 compute-0 sudo[54859]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:39 compute-0 sudo[55054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mitmdhdufadaputnunsreqvtgisjosju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113259.297258-63-255807668093119/AnsiballZ_file.py'
Nov 25 23:27:39 compute-0 sudo[55054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:40 compute-0 python3.9[55056]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:27:40 compute-0 sudo[55054]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:40 compute-0 sudo[55206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evohypsqnlnzhpdlppkjgryziuskttfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113260.228531-71-244636284448715/AnsiballZ_command.py'
Nov 25 23:27:40 compute-0 sudo[55206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:40 compute-0 python3.9[55208]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2330883822-merged.mount: Deactivated successfully.
Nov 25 23:27:41 compute-0 podman[55209]: 2025-11-25 23:27:41.032096284 +0000 UTC m=+0.076266489 system refresh
Nov 25 23:27:41 compute-0 sudo[55206]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:41 compute-0 sudo[55369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvcjwrnpsonxtwqowzgzbweqcshncbzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113261.2892268-79-61236339164330/AnsiballZ_stat.py'
Nov 25 23:27:41 compute-0 sudo[55369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:27:42 compute-0 python3.9[55371]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:27:42 compute-0 sudo[55369]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:42 compute-0 sudo[55492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aducepyonmavabcqrlwwjypegogladxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113261.2892268-79-61236339164330/AnsiballZ_copy.py'
Nov 25 23:27:42 compute-0 sudo[55492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:43 compute-0 python3.9[55494]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113261.2892268-79-61236339164330/.source.json follow=False _original_basename=podman_network_config.j2 checksum=f280372936b849c8e6221e8789aa3704a9a98b1a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:27:43 compute-0 sudo[55492]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:43 compute-0 sudo[55644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enrduredsyeellozjtqtxxenlaqhgwrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113263.2862916-94-10864311615966/AnsiballZ_stat.py'
Nov 25 23:27:43 compute-0 sudo[55644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:43 compute-0 python3.9[55646]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:27:43 compute-0 sudo[55644]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:44 compute-0 sudo[55767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffwwlfgnxgsimshjzxghwsmfdbjijabs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113263.2862916-94-10864311615966/AnsiballZ_copy.py'
Nov 25 23:27:44 compute-0 sudo[55767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:44 compute-0 python3.9[55769]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764113263.2862916-94-10864311615966/.source.conf follow=False _original_basename=registries.conf.j2 checksum=485c636425e28137b9c2e788e9d5fc748a88106d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:27:44 compute-0 sudo[55767]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:45 compute-0 sudo[55919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asuqlsqornoliuohqthbmgtopxbdnakb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113264.939692-110-217112056229042/AnsiballZ_ini_file.py'
Nov 25 23:27:45 compute-0 sudo[55919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:45 compute-0 python3.9[55921]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:27:45 compute-0 sudo[55919]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:46 compute-0 sudo[56071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scsxxrqgnnhkagalhrgwdvomjabtdezq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113265.9346325-110-109107760338584/AnsiballZ_ini_file.py'
Nov 25 23:27:46 compute-0 sudo[56071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:46 compute-0 python3.9[56073]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:27:46 compute-0 sudo[56071]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:46 compute-0 sudo[56223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpiyngwnomzupmbufwwgcdvplivfpdcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113266.6108823-110-8754518702271/AnsiballZ_ini_file.py'
Nov 25 23:27:46 compute-0 sudo[56223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:47 compute-0 python3.9[56225]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:27:47 compute-0 sudo[56223]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:47 compute-0 sudo[56375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqmipbhitcnwlwynmfhvnhhsvuvtmqmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113267.3013704-110-265465737750496/AnsiballZ_ini_file.py'
Nov 25 23:27:47 compute-0 sudo[56375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:47 compute-0 python3.9[56377]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:27:47 compute-0 sudo[56375]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:48 compute-0 sudo[56527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njfchbnsjhxgznhfduiqlpgmxnifnuix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113268.0748117-141-211206281925431/AnsiballZ_dnf.py'
Nov 25 23:27:48 compute-0 sudo[56527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:48 compute-0 python3.9[56529]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:27:49 compute-0 sudo[56527]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:50 compute-0 sudo[56680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbbircouzacbnwbugrnzwgzezawstrre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113270.273348-152-178122044054016/AnsiballZ_setup.py'
Nov 25 23:27:50 compute-0 sudo[56680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:50 compute-0 python3.9[56682]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:27:50 compute-0 sudo[56680]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:51 compute-0 sudo[56834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qleodqfdjqnsztnhklgvrrdxbicwifuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113271.159309-160-78267103638470/AnsiballZ_stat.py'
Nov 25 23:27:51 compute-0 sudo[56834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:51 compute-0 python3.9[56836]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:27:51 compute-0 sudo[56834]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:52 compute-0 sudo[56986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxewjoayinmlkvzvsyfvlwitnkjxwjdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113272.086082-169-217065893515051/AnsiballZ_stat.py'
Nov 25 23:27:52 compute-0 sudo[56986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:52 compute-0 python3.9[56988]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:27:52 compute-0 sudo[56986]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:53 compute-0 sudo[57138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-engfsteedloejbtxsgtigrwfddujwdfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113273.0431156-179-90136546824389/AnsiballZ_command.py'
Nov 25 23:27:53 compute-0 sudo[57138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:53 compute-0 python3.9[57140]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:27:53 compute-0 sudo[57138]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:54 compute-0 sudo[57291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwercclegrtwtpudljvrpirfzzqvtnrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113273.9889612-189-166950758966783/AnsiballZ_service_facts.py'
Nov 25 23:27:54 compute-0 sudo[57291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:54 compute-0 python3.9[57293]: ansible-service_facts Invoked
Nov 25 23:27:54 compute-0 network[57310]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 23:27:54 compute-0 network[57311]: 'network-scripts' will be removed from distribution in near future.
Nov 25 23:27:54 compute-0 network[57312]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 23:27:57 compute-0 sudo[57291]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:58 compute-0 sudo[57595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emlvxfhjpjznvkzjlsgxrgfjoxwdqqdu ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764113278.51342-204-174073036284151/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764113278.51342-204-174073036284151/args'
Nov 25 23:27:58 compute-0 sudo[57595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:27:59 compute-0 sudo[57595]: pam_unix(sudo:session): session closed for user root
Nov 25 23:27:59 compute-0 sudo[57762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jewvrymxipgamrrvkwkjajbqcojmsdqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113279.4453602-215-236192781412591/AnsiballZ_dnf.py'
Nov 25 23:27:59 compute-0 sudo[57762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:00 compute-0 python3.9[57764]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:28:01 compute-0 sudo[57762]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:02 compute-0 sudo[57915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbnqagddozxqupycfklhoyqtcxsjfyxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113281.6512525-228-251415251297852/AnsiballZ_package_facts.py'
Nov 25 23:28:02 compute-0 sudo[57915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:02 compute-0 python3.9[57917]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 23:28:02 compute-0 sudo[57915]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:03 compute-0 sudo[58067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faqfscmodburskbkfxdyqnbqcwceekbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113283.4254043-238-80182029626208/AnsiballZ_stat.py'
Nov 25 23:28:03 compute-0 sudo[58067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:04 compute-0 python3.9[58069]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:04 compute-0 sudo[58067]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:04 compute-0 sudo[58192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkjakwlcxobaiugqasgiorqtyhndskdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113283.4254043-238-80182029626208/AnsiballZ_copy.py'
Nov 25 23:28:04 compute-0 sudo[58192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:04 compute-0 python3.9[58194]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113283.4254043-238-80182029626208/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:04 compute-0 sudo[58192]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:05 compute-0 sudo[58346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxmljcfospozyyiqqtytboxmtmzhvklz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113285.037947-253-33826834532301/AnsiballZ_stat.py'
Nov 25 23:28:05 compute-0 sudo[58346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:05 compute-0 python3.9[58348]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:05 compute-0 sudo[58346]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:06 compute-0 sudo[58471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhgzxcoxbdhbzskhiwiypuyuhmmjlqat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113285.037947-253-33826834532301/AnsiballZ_copy.py'
Nov 25 23:28:06 compute-0 sudo[58471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:06 compute-0 python3.9[58473]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113285.037947-253-33826834532301/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:06 compute-0 sudo[58471]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:07 compute-0 sudo[58625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovauwlkppfsppgixyouenhwklyqvvhoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113286.8373766-274-74928175793989/AnsiballZ_lineinfile.py'
Nov 25 23:28:07 compute-0 sudo[58625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:07 compute-0 python3.9[58627]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:07 compute-0 sudo[58625]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:08 compute-0 sudo[58779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlnaleermtwclxwccydgdcdqzzeclnvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113288.1892729-289-358229302149/AnsiballZ_setup.py'
Nov 25 23:28:08 compute-0 sudo[58779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:08 compute-0 python3.9[58781]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:28:09 compute-0 sudo[58779]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:09 compute-0 sudo[58863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmigofamnqdinsxrfthtllaizbaqpvsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113288.1892729-289-358229302149/AnsiballZ_systemd.py'
Nov 25 23:28:09 compute-0 sudo[58863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:10 compute-0 python3.9[58865]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:28:10 compute-0 sudo[58863]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:11 compute-0 sudo[59017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uauvllwbqwsobdafhtbmqkjqjyunyphh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113290.7646825-305-218410581773535/AnsiballZ_setup.py'
Nov 25 23:28:11 compute-0 sudo[59017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:11 compute-0 python3.9[59019]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:28:11 compute-0 sudo[59017]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:12 compute-0 sudo[59101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bolqnejqjgewhzwxjpvbdxrcpkqzxbaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113290.7646825-305-218410581773535/AnsiballZ_systemd.py'
Nov 25 23:28:12 compute-0 sudo[59101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:12 compute-0 python3.9[59103]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:28:12 compute-0 chronyd[791]: chronyd exiting
Nov 25 23:28:12 compute-0 systemd[1]: Stopping NTP client/server...
Nov 25 23:28:12 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 25 23:28:12 compute-0 systemd[1]: Stopped NTP client/server.
Nov 25 23:28:12 compute-0 systemd[1]: Starting NTP client/server...
Nov 25 23:28:12 compute-0 chronyd[59112]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 23:28:12 compute-0 chronyd[59112]: Frequency -26.084 +/- 0.082 ppm read from /var/lib/chrony/drift
Nov 25 23:28:12 compute-0 chronyd[59112]: Loaded seccomp filter (level 2)
Nov 25 23:28:12 compute-0 systemd[1]: Started NTP client/server.
Nov 25 23:28:12 compute-0 sudo[59101]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:12 compute-0 sshd-session[54160]: Connection closed by 192.168.122.30 port 36136
Nov 25 23:28:12 compute-0 sshd-session[54157]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:28:12 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 25 23:28:12 compute-0 systemd[1]: session-12.scope: Consumed 28.618s CPU time.
Nov 25 23:28:12 compute-0 systemd-logind[789]: Session 12 logged out. Waiting for processes to exit.
Nov 25 23:28:12 compute-0 systemd-logind[789]: Removed session 12.
Nov 25 23:28:18 compute-0 sshd-session[59138]: Accepted publickey for zuul from 192.168.122.30 port 49782 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:28:18 compute-0 systemd-logind[789]: New session 13 of user zuul.
Nov 25 23:28:18 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 25 23:28:18 compute-0 sshd-session[59138]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:28:18 compute-0 sudo[59291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sospxwctpramtbabofsyqdjqdmydxgts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113298.2555292-22-206989109787853/AnsiballZ_file.py'
Nov 25 23:28:18 compute-0 sudo[59291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:19 compute-0 python3.9[59293]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:19 compute-0 sudo[59291]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:19 compute-0 sudo[59445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kksqmzytdgercwmnpdacmliknscvqvdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113299.3208683-34-265652927229391/AnsiballZ_stat.py'
Nov 25 23:28:19 compute-0 sudo[59445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:20 compute-0 python3.9[59447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:20 compute-0 sudo[59445]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:20 compute-0 sudo[59568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwlekgekksacjjqjjwelvskptgnwivka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113299.3208683-34-265652927229391/AnsiballZ_copy.py'
Nov 25 23:28:20 compute-0 sudo[59568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:20 compute-0 sshd-session[59370]: Invalid user g from 68.183.82.237 port 37110
Nov 25 23:28:21 compute-0 sshd-session[59370]: Received disconnect from 68.183.82.237 port 37110:11: Bye Bye [preauth]
Nov 25 23:28:21 compute-0 sshd-session[59370]: Disconnected from invalid user g 68.183.82.237 port 37110 [preauth]
Nov 25 23:28:21 compute-0 python3.9[59570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113299.3208683-34-265652927229391/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:21 compute-0 sudo[59568]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:21 compute-0 sshd-session[59141]: Connection closed by 192.168.122.30 port 49782
Nov 25 23:28:21 compute-0 sshd-session[59138]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:28:21 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 25 23:28:21 compute-0 systemd[1]: session-13.scope: Consumed 2.188s CPU time.
Nov 25 23:28:21 compute-0 systemd-logind[789]: Session 13 logged out. Waiting for processes to exit.
Nov 25 23:28:21 compute-0 systemd-logind[789]: Removed session 13.
Nov 25 23:28:24 compute-0 sshd-session[59595]: Invalid user ftpuser1 from 202.83.162.167 port 33346
Nov 25 23:28:24 compute-0 sshd-session[59595]: Received disconnect from 202.83.162.167 port 33346:11: Bye Bye [preauth]
Nov 25 23:28:24 compute-0 sshd-session[59595]: Disconnected from invalid user ftpuser1 202.83.162.167 port 33346 [preauth]
Nov 25 23:28:27 compute-0 sshd-session[59597]: Accepted publickey for zuul from 192.168.122.30 port 38184 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:28:27 compute-0 systemd-logind[789]: New session 14 of user zuul.
Nov 25 23:28:27 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 25 23:28:27 compute-0 sshd-session[59597]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:28:28 compute-0 python3.9[59750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:28:29 compute-0 sudo[59904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyfvqrtrvgwolescrqfhgoevunynrkhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113308.8779666-33-18021498073488/AnsiballZ_file.py'
Nov 25 23:28:29 compute-0 sudo[59904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:29 compute-0 python3.9[59906]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:29 compute-0 sudo[59904]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:30 compute-0 sudo[60079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktkyvumxewpivxhsqhdlweljehjbvetj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113310.0955813-41-122739551407590/AnsiballZ_stat.py'
Nov 25 23:28:30 compute-0 sudo[60079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:30 compute-0 python3.9[60081]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:30 compute-0 sudo[60079]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:31 compute-0 sudo[60202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmjgxanaamjtletcsdlcmapybsqdyca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113310.0955813-41-122739551407590/AnsiballZ_copy.py'
Nov 25 23:28:31 compute-0 sudo[60202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:31 compute-0 python3.9[60204]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764113310.0955813-41-122739551407590/.source.json _original_basename=.ukar7_1n follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:31 compute-0 sudo[60202]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:32 compute-0 sudo[60354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwpqcqwojcaftzrmxamwdiqpkyusllms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113312.256838-64-252325806125914/AnsiballZ_stat.py'
Nov 25 23:28:32 compute-0 sudo[60354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:32 compute-0 python3.9[60356]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:32 compute-0 sudo[60354]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:33 compute-0 sudo[60477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmqxninxyimegswrefmzpsdiytakrqtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113312.256838-64-252325806125914/AnsiballZ_copy.py'
Nov 25 23:28:33 compute-0 sudo[60477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:33 compute-0 python3.9[60479]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113312.256838-64-252325806125914/.source _original_basename=.k99ai_wz follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:33 compute-0 sudo[60477]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:34 compute-0 sudo[60629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-squidfluazaatiylfdrqwaixttuchjlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113313.850036-80-168567737097241/AnsiballZ_file.py'
Nov 25 23:28:34 compute-0 sudo[60629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:34 compute-0 python3.9[60631]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:28:34 compute-0 sudo[60629]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:35 compute-0 sudo[60781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-holqdvmzgonzvmcvnyhuosnhtvbgkdyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113314.6953118-88-192699240340847/AnsiballZ_stat.py'
Nov 25 23:28:35 compute-0 sudo[60781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:35 compute-0 python3.9[60783]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:35 compute-0 sudo[60781]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:35 compute-0 sudo[60904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfutozhlzfahrkbqbbthujbcbwkvpvzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113314.6953118-88-192699240340847/AnsiballZ_copy.py'
Nov 25 23:28:35 compute-0 sudo[60904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:35 compute-0 python3.9[60906]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764113314.6953118-88-192699240340847/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:28:35 compute-0 sudo[60904]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:36 compute-0 sudo[61056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmizifyjwmxzyzrvmtspixwngundhvvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113316.1433115-88-262067154497655/AnsiballZ_stat.py'
Nov 25 23:28:36 compute-0 sudo[61056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:36 compute-0 python3.9[61058]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:36 compute-0 sudo[61056]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:37 compute-0 sudo[61179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhicldpqghszmdwykvvsmdgsxfqpylsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113316.1433115-88-262067154497655/AnsiballZ_copy.py'
Nov 25 23:28:37 compute-0 sudo[61179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:37 compute-0 python3.9[61181]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764113316.1433115-88-262067154497655/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:28:37 compute-0 sudo[61179]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:38 compute-0 sudo[61331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaruttlglbtzlvfadxzfubgbfxmyzdhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113317.6690962-117-175843668187613/AnsiballZ_file.py'
Nov 25 23:28:38 compute-0 sudo[61331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:38 compute-0 python3.9[61333]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:38 compute-0 sudo[61331]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:38 compute-0 sudo[61483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuisgpvfgryoauqgnmllfdoufywtnuee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113318.6002824-125-66351522289677/AnsiballZ_stat.py'
Nov 25 23:28:38 compute-0 sudo[61483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:39 compute-0 python3.9[61485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:39 compute-0 sudo[61483]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:39 compute-0 sudo[61606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hphksfjfxiqizzkpnvcfvaebzmzaicqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113318.6002824-125-66351522289677/AnsiballZ_copy.py'
Nov 25 23:28:39 compute-0 sudo[61606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:39 compute-0 python3.9[61608]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113318.6002824-125-66351522289677/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:39 compute-0 sudo[61606]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:40 compute-0 sudo[61758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqdqsfxtndhgkieydmwchukzjwokcgkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113320.090793-140-47688736739004/AnsiballZ_stat.py'
Nov 25 23:28:40 compute-0 sudo[61758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:40 compute-0 python3.9[61760]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:40 compute-0 sudo[61758]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:41 compute-0 sudo[61881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prkcoqkoppvhypistjhusmzqjilfamxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113320.090793-140-47688736739004/AnsiballZ_copy.py'
Nov 25 23:28:41 compute-0 sudo[61881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:41 compute-0 python3.9[61883]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113320.090793-140-47688736739004/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:41 compute-0 sudo[61881]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:42 compute-0 sudo[62033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghkvcvwgfjocziqyvkongehocxsifoye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113321.710489-155-195693790241059/AnsiballZ_systemd.py'
Nov 25 23:28:42 compute-0 sudo[62033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:42 compute-0 python3.9[62035]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:28:42 compute-0 systemd[1]: Reloading.
Nov 25 23:28:42 compute-0 systemd-sysv-generator[62066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:28:42 compute-0 systemd-rc-local-generator[62062]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:28:42 compute-0 systemd[1]: Reloading.
Nov 25 23:28:43 compute-0 systemd-rc-local-generator[62101]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:28:43 compute-0 systemd-sysv-generator[62104]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:28:43 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 25 23:28:43 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 25 23:28:43 compute-0 sudo[62033]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:43 compute-0 sudo[62260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkhnnhshpoqikgdnkgjaoikxsyygoynh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113323.511341-163-185667835254116/AnsiballZ_stat.py'
Nov 25 23:28:43 compute-0 sudo[62260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:44 compute-0 python3.9[62262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:44 compute-0 sudo[62260]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:44 compute-0 sudo[62383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izzshjezxgvceeqxmuhvylyghifayukk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113323.511341-163-185667835254116/AnsiballZ_copy.py'
Nov 25 23:28:44 compute-0 sudo[62383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:44 compute-0 python3.9[62385]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113323.511341-163-185667835254116/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:44 compute-0 sudo[62383]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:45 compute-0 sudo[62535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dulubskxidginkmexgfogjeqjgqktckv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113325.0181072-178-183837825266431/AnsiballZ_stat.py'
Nov 25 23:28:45 compute-0 sudo[62535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:45 compute-0 python3.9[62537]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:45 compute-0 sudo[62535]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:46 compute-0 sudo[62658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bixarhtqunfigcthjrqsqkwullejduho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113325.0181072-178-183837825266431/AnsiballZ_copy.py'
Nov 25 23:28:46 compute-0 sudo[62658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:46 compute-0 python3.9[62660]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113325.0181072-178-183837825266431/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:46 compute-0 sudo[62658]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:46 compute-0 sudo[62810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsjenijzhgaktnjxnnwplkerulppgqtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113326.510693-193-228947211202639/AnsiballZ_systemd.py'
Nov 25 23:28:46 compute-0 sudo[62810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:47 compute-0 python3.9[62812]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:28:47 compute-0 systemd[1]: Reloading.
Nov 25 23:28:47 compute-0 systemd-rc-local-generator[62839]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:28:47 compute-0 systemd-sysv-generator[62842]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:28:47 compute-0 systemd[1]: Reloading.
Nov 25 23:28:47 compute-0 systemd-rc-local-generator[62872]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:28:47 compute-0 systemd-sysv-generator[62876]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:28:47 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 23:28:47 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 23:28:47 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 23:28:47 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 23:28:47 compute-0 sudo[62810]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:48 compute-0 python3.9[63037]: ansible-ansible.builtin.service_facts Invoked
Nov 25 23:28:48 compute-0 network[63054]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 23:28:48 compute-0 network[63055]: 'network-scripts' will be removed from distribution in near future.
Nov 25 23:28:48 compute-0 network[63056]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 23:28:52 compute-0 sudo[63316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdokxitfllysmntvbajeecnpkahxylzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113332.1546648-209-227979846702943/AnsiballZ_systemd.py'
Nov 25 23:28:52 compute-0 sudo[63316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:52 compute-0 python3.9[63318]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:28:52 compute-0 systemd[1]: Reloading.
Nov 25 23:28:53 compute-0 systemd-rc-local-generator[63348]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:28:53 compute-0 systemd-sysv-generator[63351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:28:53 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 25 23:28:53 compute-0 iptables.init[63358]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 25 23:28:53 compute-0 iptables.init[63358]: iptables: Flushing firewall rules: [  OK  ]
Nov 25 23:28:53 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 25 23:28:53 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 25 23:28:53 compute-0 sudo[63316]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:54 compute-0 sudo[63552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdfvnaxiatkwaqsqydmszkvccykwgdvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113333.8887475-209-74164022428381/AnsiballZ_systemd.py'
Nov 25 23:28:54 compute-0 sudo[63552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:54 compute-0 python3.9[63554]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:28:54 compute-0 sudo[63552]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:55 compute-0 sudo[63706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwuzvtgyocdyhafcieqihrxbbwbfwdks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113334.9771695-225-105991427160397/AnsiballZ_systemd.py'
Nov 25 23:28:55 compute-0 sudo[63706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:55 compute-0 python3.9[63708]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:28:55 compute-0 systemd[1]: Reloading.
Nov 25 23:28:55 compute-0 systemd-rc-local-generator[63731]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:28:55 compute-0 systemd-sysv-generator[63739]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:28:56 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 25 23:28:56 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 25 23:28:56 compute-0 sudo[63706]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:56 compute-0 sudo[63897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peilgcxvjcnxjpgbzpoajiaacreipfpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113336.3721673-233-32806863775297/AnsiballZ_command.py'
Nov 25 23:28:56 compute-0 sudo[63897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:57 compute-0 python3.9[63899]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:28:57 compute-0 sudo[63897]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:58 compute-0 sudo[64050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnlxzoytznjeejvokemysmnseomqskmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113337.7305067-247-244204145258460/AnsiballZ_stat.py'
Nov 25 23:28:58 compute-0 sudo[64050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:58 compute-0 python3.9[64052]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:28:58 compute-0 sudo[64050]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:58 compute-0 sudo[64175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqtijqvqcqdwfvstzdxfnjfzylzndcde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113337.7305067-247-244204145258460/AnsiballZ_copy.py'
Nov 25 23:28:58 compute-0 sudo[64175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:28:59 compute-0 python3.9[64177]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113337.7305067-247-244204145258460/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:28:59 compute-0 sudo[64175]: pam_unix(sudo:session): session closed for user root
Nov 25 23:28:59 compute-0 sudo[64328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okwjclgvjfkzxisgkumzzkwgwimpkvpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113339.384015-262-111008627175449/AnsiballZ_systemd.py'
Nov 25 23:28:59 compute-0 sudo[64328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:00 compute-0 python3.9[64330]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:29:00 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 25 23:29:00 compute-0 sshd[1002]: Received SIGHUP; restarting.
Nov 25 23:29:00 compute-0 sshd[1002]: Server listening on 0.0.0.0 port 22.
Nov 25 23:29:00 compute-0 sshd[1002]: Server listening on :: port 22.
Nov 25 23:29:00 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 25 23:29:00 compute-0 sudo[64328]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:00 compute-0 sudo[64484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfvfymzuyeyrmstcdezmtqpoczxkjwog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113340.4816096-270-88727131789485/AnsiballZ_file.py'
Nov 25 23:29:00 compute-0 sudo[64484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:01 compute-0 python3.9[64486]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:01 compute-0 sudo[64484]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:01 compute-0 sudo[64636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miiojnjilpfxornvnafcujuarbzuaxvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113341.344018-278-139466720103099/AnsiballZ_stat.py'
Nov 25 23:29:01 compute-0 sudo[64636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:01 compute-0 python3.9[64638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:29:01 compute-0 sudo[64636]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:02 compute-0 sudo[64759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xofyjezukwjwsllgutfcvxkmhekzpahw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113341.344018-278-139466720103099/AnsiballZ_copy.py'
Nov 25 23:29:02 compute-0 sudo[64759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:02 compute-0 python3.9[64761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113341.344018-278-139466720103099/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:02 compute-0 sudo[64759]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:03 compute-0 sudo[64911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lspqulpnxyxdipmrswpwugzgskcklmya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113343.033556-296-260680342447500/AnsiballZ_timezone.py'
Nov 25 23:29:03 compute-0 sudo[64911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:03 compute-0 python3.9[64913]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 23:29:03 compute-0 systemd[1]: Starting Time & Date Service...
Nov 25 23:29:03 compute-0 systemd[1]: Started Time & Date Service.
Nov 25 23:29:03 compute-0 sudo[64911]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:04 compute-0 sudo[65067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkepxsgmbxaxhwzswbjttankzkywatnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113344.2401283-305-75349790700837/AnsiballZ_file.py'
Nov 25 23:29:04 compute-0 sudo[65067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:04 compute-0 python3.9[65069]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:04 compute-0 sudo[65067]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:05 compute-0 sudo[65219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzjwuziivyytdjqtgwdhvzgakxdrkshe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113345.1033528-313-277119713110126/AnsiballZ_stat.py'
Nov 25 23:29:05 compute-0 sudo[65219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:05 compute-0 python3.9[65221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:29:05 compute-0 sudo[65219]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:06 compute-0 sudo[65342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kssqqmgsnjkbpgjaazpzekgahyqfuiik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113345.1033528-313-277119713110126/AnsiballZ_copy.py'
Nov 25 23:29:06 compute-0 sudo[65342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:06 compute-0 python3.9[65344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113345.1033528-313-277119713110126/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:06 compute-0 sudo[65342]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:06 compute-0 sudo[65494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohrocfezmwkgydkdqumaxylrrosdzoco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113346.5439913-328-227392112409970/AnsiballZ_stat.py'
Nov 25 23:29:06 compute-0 sudo[65494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:07 compute-0 python3.9[65496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:29:07 compute-0 sudo[65494]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:07 compute-0 sudo[65617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvvkasttoavzesaqgofzwficdbeifxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113346.5439913-328-227392112409970/AnsiballZ_copy.py'
Nov 25 23:29:07 compute-0 sudo[65617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:07 compute-0 python3.9[65619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113346.5439913-328-227392112409970/.source.yaml _original_basename=.4ugmoiyc follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:07 compute-0 sudo[65617]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:08 compute-0 sudo[65769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxcbyvtodvkzdfxdjsarwiklrqgntxmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113348.0520525-343-156160704396082/AnsiballZ_stat.py'
Nov 25 23:29:08 compute-0 sudo[65769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:08 compute-0 python3.9[65771]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:29:08 compute-0 sudo[65769]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:09 compute-0 sudo[65892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmfumufbuwoewjflirdljxrevuycrckq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113348.0520525-343-156160704396082/AnsiballZ_copy.py'
Nov 25 23:29:09 compute-0 sudo[65892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:09 compute-0 python3.9[65894]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113348.0520525-343-156160704396082/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:09 compute-0 sudo[65892]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:09 compute-0 sudo[66044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqicbxzsifekbgvydosvzftiplbkkifa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113349.5515943-358-53593554379/AnsiballZ_command.py'
Nov 25 23:29:09 compute-0 sudo[66044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:10 compute-0 python3.9[66046]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:29:10 compute-0 sudo[66044]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:10 compute-0 sudo[66197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgrpkldsaukjtetlyednnkkmmnnwehsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113350.4422908-366-232415449714819/AnsiballZ_command.py'
Nov 25 23:29:10 compute-0 sudo[66197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:11 compute-0 python3.9[66199]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:29:11 compute-0 sudo[66197]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:11 compute-0 sudo[66350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjkyjqtsecxwznqtjarayonymphsdpgr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764113351.32832-374-142936954363923/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 23:29:11 compute-0 sudo[66350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:12 compute-0 python3[66352]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 23:29:12 compute-0 sudo[66350]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:12 compute-0 sudo[66502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmdwzhdafgvpdifuxymmeugcbxvjtmbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113352.3605719-382-266148416733782/AnsiballZ_stat.py'
Nov 25 23:29:12 compute-0 sudo[66502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:13 compute-0 python3.9[66504]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:29:13 compute-0 sudo[66502]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:13 compute-0 sudo[66625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhvdrkfkhqfysibufarattvoxemamjme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113352.3605719-382-266148416733782/AnsiballZ_copy.py'
Nov 25 23:29:13 compute-0 sudo[66625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:13 compute-0 python3.9[66627]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113352.3605719-382-266148416733782/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:13 compute-0 sudo[66625]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:14 compute-0 sudo[66779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdbqtswyzieeqwwapgxdmlyzttnkeejv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113353.9899564-397-73435920212215/AnsiballZ_stat.py'
Nov 25 23:29:14 compute-0 sudo[66779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:14 compute-0 python3.9[66781]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:29:14 compute-0 sudo[66779]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:14 compute-0 sshd-session[66652]: Invalid user docker from 124.18.141.70 port 35152
Nov 25 23:29:14 compute-0 sudo[66902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixexwrdlmtgmoutmhcgjgqqmiflawnij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113353.9899564-397-73435920212215/AnsiballZ_copy.py'
Nov 25 23:29:14 compute-0 sudo[66902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:15 compute-0 sshd-session[66652]: Received disconnect from 124.18.141.70 port 35152:11: Bye Bye [preauth]
Nov 25 23:29:15 compute-0 sshd-session[66652]: Disconnected from invalid user docker 124.18.141.70 port 35152 [preauth]
Nov 25 23:29:15 compute-0 python3.9[66904]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113353.9899564-397-73435920212215/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:15 compute-0 sudo[66902]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:15 compute-0 sudo[67054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwogfyigktufbfcmhkvexwfnbjthjjcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113355.427047-412-95570621061106/AnsiballZ_stat.py'
Nov 25 23:29:15 compute-0 sudo[67054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:15 compute-0 python3.9[67056]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:29:15 compute-0 sudo[67054]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:16 compute-0 sudo[67177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkuvxzwiojrucdqtonhvesttgezcpeys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113355.427047-412-95570621061106/AnsiballZ_copy.py'
Nov 25 23:29:16 compute-0 sudo[67177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:16 compute-0 python3.9[67179]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113355.427047-412-95570621061106/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:16 compute-0 sudo[67177]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:17 compute-0 sudo[67329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alwnvvkelgdhpxjojfwvvfjhcsojshfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113356.769764-427-49414068275958/AnsiballZ_stat.py'
Nov 25 23:29:17 compute-0 sudo[67329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:17 compute-0 python3.9[67331]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:29:17 compute-0 sudo[67329]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:17 compute-0 sudo[67452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khejwoocaytybqdvcjrfothhmydgmvjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113356.769764-427-49414068275958/AnsiballZ_copy.py'
Nov 25 23:29:17 compute-0 sudo[67452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:18 compute-0 python3.9[67454]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113356.769764-427-49414068275958/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:18 compute-0 sudo[67452]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:18 compute-0 sudo[67604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdatzgecnrdeswhzixnzisihhunubwyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113358.3091595-442-55445241155620/AnsiballZ_stat.py'
Nov 25 23:29:18 compute-0 sudo[67604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:18 compute-0 python3.9[67606]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:29:18 compute-0 sudo[67604]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:19 compute-0 sudo[67727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fantiagqbgqpojwmfkbnkfndmfwaeaao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113358.3091595-442-55445241155620/AnsiballZ_copy.py'
Nov 25 23:29:19 compute-0 sudo[67727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:19 compute-0 python3.9[67729]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113358.3091595-442-55445241155620/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:19 compute-0 sudo[67727]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:20 compute-0 sudo[67879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbucsbvmglkclclsphrcdxflhbqipylx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113359.7546716-457-163251950684917/AnsiballZ_file.py'
Nov 25 23:29:20 compute-0 sudo[67879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:20 compute-0 python3.9[67881]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:20 compute-0 sudo[67879]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:20 compute-0 sudo[68031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efzpmubezzqvzkhjxjzokeslgvzhtzew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113360.56421-465-95361420270292/AnsiballZ_command.py'
Nov 25 23:29:20 compute-0 sudo[68031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:21 compute-0 python3.9[68033]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:29:21 compute-0 sudo[68031]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:22 compute-0 sudo[68190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdapfszomjwnyypuysuetkdbzaqvbmdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113361.482632-473-129936163827040/AnsiballZ_blockinfile.py'
Nov 25 23:29:22 compute-0 sudo[68190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:22 compute-0 python3.9[68192]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:22 compute-0 sudo[68190]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:22 compute-0 sudo[68343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcidwrxvqekmripnjletdvfaaljnvkbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113362.5424254-482-66730709551209/AnsiballZ_file.py'
Nov 25 23:29:22 compute-0 sudo[68343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:23 compute-0 python3.9[68345]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:23 compute-0 sudo[68343]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:23 compute-0 sudo[68495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocnkoufgncfroaiunrletamzjyfxnzwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113363.3386009-482-232035685412031/AnsiballZ_file.py'
Nov 25 23:29:23 compute-0 sudo[68495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:23 compute-0 python3.9[68497]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:23 compute-0 sudo[68495]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:24 compute-0 sudo[68647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eekxzyafifpcpnxfckcohvwokihtvhbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113364.1731482-497-233234446165222/AnsiballZ_mount.py'
Nov 25 23:29:24 compute-0 sudo[68647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:25 compute-0 python3.9[68649]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 23:29:25 compute-0 sudo[68647]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:25 compute-0 sudo[68800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzthpgfcolrpzjzttpfwnvgifvcveepw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113365.3097475-497-117930957738554/AnsiballZ_mount.py'
Nov 25 23:29:25 compute-0 sudo[68800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:25 compute-0 python3.9[68802]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 23:29:25 compute-0 sudo[68800]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:26 compute-0 sshd-session[59600]: Connection closed by 192.168.122.30 port 38184
Nov 25 23:29:26 compute-0 sshd-session[59597]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:29:26 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 25 23:29:26 compute-0 systemd[1]: session-14.scope: Consumed 43.757s CPU time.
Nov 25 23:29:26 compute-0 systemd-logind[789]: Session 14 logged out. Waiting for processes to exit.
Nov 25 23:29:26 compute-0 systemd-logind[789]: Removed session 14.
Nov 25 23:29:31 compute-0 sshd-session[68828]: Accepted publickey for zuul from 192.168.122.30 port 55022 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:29:31 compute-0 systemd-logind[789]: New session 15 of user zuul.
Nov 25 23:29:31 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 25 23:29:31 compute-0 sshd-session[68828]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:29:32 compute-0 sudo[68981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwuvmtwktrcxexsknozuduxciyfakjxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113371.8848574-16-247426424547128/AnsiballZ_tempfile.py'
Nov 25 23:29:32 compute-0 sudo[68981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:32 compute-0 python3.9[68983]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 23:29:32 compute-0 sudo[68981]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:33 compute-0 sudo[69133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivbrijfltvmqnvqqakvqcjateeovscco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113372.9725888-28-144215963855597/AnsiballZ_stat.py'
Nov 25 23:29:33 compute-0 sudo[69133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:33 compute-0 python3.9[69135]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:29:33 compute-0 sudo[69133]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:33 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 23:29:34 compute-0 sudo[69287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgzgoldstdtlkdkvgugjlgywgzghtnxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113374.051535-38-125530799206075/AnsiballZ_setup.py'
Nov 25 23:29:34 compute-0 sudo[69287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:34 compute-0 python3.9[69289]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:29:34 compute-0 sudo[69287]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:35 compute-0 sudo[69441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtjfkepllkmmvovhswqffomnyllrpifb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113375.2993567-47-255988490711649/AnsiballZ_blockinfile.py'
Nov 25 23:29:35 compute-0 sudo[69441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:36 compute-0 python3.9[69443]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxrfdY9cGWIXdy/1Oy3o25kkem+UfkfNZM3QAaYeemr9vZEt0Kpt4rTEaZjtK/HkgMSoli0ko2twHhREfmcDjCZiEvSPhpr9yvJyxLe6m3r7nR2fIVc/1+5SeUdcJGWT8hvgD5okMZtCerl/MiW6+tFRt7Ar6X2TFlwXPjq3wia85WpL7X9vq40wZz0XlbpQxNxcEJWeVajcrd63Qib0m1FmhnmHPUqLHN0WmxXnMtONzo4fUQjq3zn230bIZCmjbFatl10s4NRy2udfAA7Xi0ubCZxQ/E8omg7y4ZxA94dJHZPmkCFSVLZUqdW3S3Ofhcem+PFVKRR2UvfcYHi79G6lS5brk3pbHqdyjd4/3scYp3aXFFt7ErEEhVud762RLGAHeACGlJQxmX8B/FbnWmbkw8BfptrYtzSuSqIXmN3UXrLrmfRrB+IMcIbbs/vzVMk6n6BzUjdXscFfnPltHEyvmdeIEBDyC5FLoJ2bTTrQpLt63pLIU09IA55rhBA+E=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBQX5RNdc24Y/t6cF9q9hL3e4G9bhmnpPT/NJWIujGtr
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKt48jJg/HSNlIL9ftEIQgyUPOj8qZ1KotNNqzrVPi+UhJTDsaDnHI9k4z0iWOz87RQtpHNoPDx9+/vOjXzjj4o=
                                             create=True mode=0644 path=/tmp/ansible.jpe4dkbb state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:36 compute-0 sudo[69441]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:36 compute-0 sshd-session[69290]: Received disconnect from 68.183.82.237 port 35904:11: Bye Bye [preauth]
Nov 25 23:29:36 compute-0 sshd-session[69290]: Disconnected from authenticating user root 68.183.82.237 port 35904 [preauth]
Nov 25 23:29:36 compute-0 sudo[69593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdrmouwjlurbyuxwlwekwbdffcmgzihl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113376.2818537-55-182032885583138/AnsiballZ_command.py'
Nov 25 23:29:36 compute-0 sudo[69593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:37 compute-0 python3.9[69595]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.jpe4dkbb' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:29:37 compute-0 sudo[69593]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:37 compute-0 sudo[69747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grkryppskpigkssxwbujnojkcjuwkgmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113377.250584-63-85175316797821/AnsiballZ_file.py'
Nov 25 23:29:37 compute-0 sudo[69747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:37 compute-0 python3.9[69749]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.jpe4dkbb state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:37 compute-0 sudo[69747]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:38 compute-0 sshd-session[68831]: Connection closed by 192.168.122.30 port 55022
Nov 25 23:29:38 compute-0 sshd-session[68828]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:29:38 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 25 23:29:38 compute-0 systemd[1]: session-15.scope: Consumed 4.263s CPU time.
Nov 25 23:29:38 compute-0 systemd-logind[789]: Session 15 logged out. Waiting for processes to exit.
Nov 25 23:29:38 compute-0 systemd-logind[789]: Removed session 15.
Nov 25 23:29:43 compute-0 sshd-session[69774]: Accepted publickey for zuul from 192.168.122.30 port 35748 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:29:43 compute-0 systemd-logind[789]: New session 16 of user zuul.
Nov 25 23:29:43 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 25 23:29:43 compute-0 sshd-session[69774]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:29:44 compute-0 python3.9[69927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:29:46 compute-0 sudo[70081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvlbdtbbjtepeddyrilzenklxbiwamtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113385.315683-32-231068714626621/AnsiballZ_systemd.py'
Nov 25 23:29:46 compute-0 sudo[70081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:46 compute-0 python3.9[70083]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 23:29:46 compute-0 sudo[70081]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:47 compute-0 sudo[70235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdciwitlaaqariswecelzvdlijsdrqit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113386.6082969-40-142448798372625/AnsiballZ_systemd.py'
Nov 25 23:29:47 compute-0 sudo[70235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:47 compute-0 python3.9[70237]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:29:47 compute-0 sudo[70235]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:48 compute-0 sudo[70388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toslxqatvwoxiwtodbaqoeuusilfaguf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113387.5770447-49-62891385521870/AnsiballZ_command.py'
Nov 25 23:29:48 compute-0 sudo[70388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:48 compute-0 python3.9[70390]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:29:48 compute-0 sudo[70388]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:49 compute-0 sudo[70541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtdajydoribaigpnngvsrfcvffnglhdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113388.5672622-57-86151708213561/AnsiballZ_stat.py'
Nov 25 23:29:49 compute-0 sudo[70541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:49 compute-0 python3.9[70543]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:29:49 compute-0 sudo[70541]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:49 compute-0 sudo[70695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvlijfvhufyucxxrnaqefeohhfgyxzpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113389.4394374-65-226710345193635/AnsiballZ_command.py'
Nov 25 23:29:49 compute-0 sudo[70695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:49 compute-0 python3.9[70697]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:29:50 compute-0 sudo[70695]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:50 compute-0 sudo[70852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwlepswmlwrvkbywkycvynaukavlupaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113390.2473292-73-264710792694386/AnsiballZ_file.py'
Nov 25 23:29:50 compute-0 sudo[70852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:50 compute-0 python3.9[70854]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:29:51 compute-0 sudo[70852]: pam_unix(sudo:session): session closed for user root
Nov 25 23:29:51 compute-0 sshd-session[69777]: Connection closed by 192.168.122.30 port 35748
Nov 25 23:29:51 compute-0 sshd-session[69774]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:29:51 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 25 23:29:51 compute-0 systemd[1]: session-16.scope: Consumed 5.062s CPU time.
Nov 25 23:29:51 compute-0 systemd-logind[789]: Session 16 logged out. Waiting for processes to exit.
Nov 25 23:29:51 compute-0 systemd-logind[789]: Removed session 16.
Nov 25 23:29:51 compute-0 sshd-session[70778]: Received disconnect from 202.83.162.167 port 48526:11: Bye Bye [preauth]
Nov 25 23:29:51 compute-0 sshd-session[70778]: Disconnected from authenticating user root 202.83.162.167 port 48526 [preauth]
Nov 25 23:29:57 compute-0 sshd-session[70879]: Accepted publickey for zuul from 192.168.122.30 port 45154 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:29:57 compute-0 systemd-logind[789]: New session 17 of user zuul.
Nov 25 23:29:57 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 25 23:29:57 compute-0 sshd-session[70879]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:29:58 compute-0 python3.9[71032]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:29:59 compute-0 sudo[71188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obhfodfiaxvvmenaengtbzabwxuggqhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113398.8463435-34-209610358445201/AnsiballZ_setup.py'
Nov 25 23:29:59 compute-0 sudo[71188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:29:59 compute-0 python3.9[71190]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:29:59 compute-0 sudo[71188]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:00 compute-0 sudo[71272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kladyebrctfrozgurtlpqyagpsztrynq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113398.8463435-34-209610358445201/AnsiballZ_dnf.py'
Nov 25 23:30:00 compute-0 sudo[71272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:00 compute-0 python3.9[71274]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 23:30:01 compute-0 sshd-session[71033]: Invalid user 1 from 45.78.216.132 port 60794
Nov 25 23:30:01 compute-0 sshd-session[71033]: Received disconnect from 45.78.216.132 port 60794:11: Bye Bye [preauth]
Nov 25 23:30:01 compute-0 sshd-session[71033]: Disconnected from invalid user 1 45.78.216.132 port 60794 [preauth]
Nov 25 23:30:01 compute-0 sudo[71272]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:02 compute-0 python3.9[71425]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:30:04 compute-0 python3.9[71576]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 23:30:04 compute-0 python3.9[71726]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:30:04 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 23:30:05 compute-0 python3.9[71877]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:30:06 compute-0 sshd-session[70882]: Connection closed by 192.168.122.30 port 45154
Nov 25 23:30:06 compute-0 sshd-session[70879]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:30:06 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 25 23:30:06 compute-0 systemd[1]: session-17.scope: Consumed 6.701s CPU time.
Nov 25 23:30:06 compute-0 systemd-logind[789]: Session 17 logged out. Waiting for processes to exit.
Nov 25 23:30:06 compute-0 systemd-logind[789]: Removed session 17.
Nov 25 23:30:13 compute-0 sshd-session[71902]: Accepted publickey for zuul from 38.102.83.97 port 49484 ssh2: RSA SHA256:JQBEbRWdTRxWWBZFMXTcgpV6tX76cFl0ZQrAk8DySQQ
Nov 25 23:30:13 compute-0 systemd-logind[789]: New session 18 of user zuul.
Nov 25 23:30:13 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 25 23:30:13 compute-0 sshd-session[71902]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:30:13 compute-0 sudo[71978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgvhqydryvrmzfizjjrunmnlrxeypcfm ; /usr/bin/python3'
Nov 25 23:30:13 compute-0 sudo[71978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:14 compute-0 useradd[71982]: new group: name=ceph-admin, GID=42478
Nov 25 23:30:14 compute-0 useradd[71982]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 25 23:30:14 compute-0 sudo[71978]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:14 compute-0 sudo[72064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buommpyhtuevisoajachljcuwnvjsgmb ; /usr/bin/python3'
Nov 25 23:30:14 compute-0 sudo[72064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:14 compute-0 sudo[72064]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:15 compute-0 sudo[72137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfpmfeabexhqfgnkfzpgzgjfumevqykd ; /usr/bin/python3'
Nov 25 23:30:15 compute-0 sudo[72137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:15 compute-0 sudo[72137]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:15 compute-0 sudo[72187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcizottgbycnvhrjcdtuiadrsrwwdibl ; /usr/bin/python3'
Nov 25 23:30:15 compute-0 sudo[72187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:15 compute-0 sudo[72187]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:16 compute-0 sudo[72213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqqysabyuqkxiqhopdffndrvmysohyqp ; /usr/bin/python3'
Nov 25 23:30:16 compute-0 sudo[72213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:16 compute-0 sudo[72213]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:16 compute-0 sudo[72239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkrknvwhathwbjsuuqkulzyqojotlima ; /usr/bin/python3'
Nov 25 23:30:16 compute-0 sudo[72239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:16 compute-0 sudo[72239]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:16 compute-0 sudo[72265]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzphjwigprxgdzlcmxjvgubjmwnugrmr ; /usr/bin/python3'
Nov 25 23:30:16 compute-0 sudo[72265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:17 compute-0 sudo[72265]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:17 compute-0 sudo[72343]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iekenjulnxpxdfjivtwgoulhtxnvsxjm ; /usr/bin/python3'
Nov 25 23:30:17 compute-0 sudo[72343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:17 compute-0 sudo[72343]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:17 compute-0 sudo[72416]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztzcnehlfttucgqiunwbuxngaxfzutvy ; /usr/bin/python3'
Nov 25 23:30:17 compute-0 sudo[72416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:17 compute-0 sudo[72416]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:18 compute-0 sudo[72518]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptpkrutrbihghlskuroqmoiollswvtbl ; /usr/bin/python3'
Nov 25 23:30:18 compute-0 sudo[72518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:18 compute-0 sudo[72518]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:18 compute-0 sudo[72591]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etdbvseuepcsmnjaiotdbhajtfwhflgp ; /usr/bin/python3'
Nov 25 23:30:18 compute-0 sudo[72591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:18 compute-0 sudo[72591]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:19 compute-0 sudo[72641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtgslcilgdzwdorzlolsfuepeiqptsju ; /usr/bin/python3'
Nov 25 23:30:19 compute-0 sudo[72641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:19 compute-0 python3[72643]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:30:20 compute-0 sudo[72641]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:21 compute-0 sudo[72736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsloajqgfoksifcmvomwrrecpxohruhj ; /usr/bin/python3'
Nov 25 23:30:21 compute-0 sudo[72736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:21 compute-0 python3[72738]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 23:30:22 compute-0 chronyd[59112]: Selected source 206.108.0.133 (pool.ntp.org)
Nov 25 23:30:22 compute-0 sudo[72736]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:23 compute-0 sudo[72763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkasmmcdozovoblwyfrcchmgtvjpbklu ; /usr/bin/python3'
Nov 25 23:30:23 compute-0 sudo[72763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:23 compute-0 python3[72765]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 23:30:23 compute-0 sudo[72763]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:23 compute-0 sudo[72789]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxqodtatspauyxrbdlpxgbblktgltkuu ; /usr/bin/python3'
Nov 25 23:30:23 compute-0 sudo[72789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:23 compute-0 python3[72791]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:30:23 compute-0 kernel: loop: module loaded
Nov 25 23:30:23 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 25 23:30:23 compute-0 sudo[72789]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:23 compute-0 sudo[72823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiofrseywqvoumokfwtsgmedzvaiuvtp ; /usr/bin/python3'
Nov 25 23:30:23 compute-0 sudo[72823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:24 compute-0 python3[72825]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:30:24 compute-0 lvm[72828]: PV /dev/loop3 not used.
Nov 25 23:30:24 compute-0 lvm[72830]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 23:30:24 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 25 23:30:24 compute-0 lvm[72836]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 25 23:30:24 compute-0 lvm[72840]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 23:30:24 compute-0 lvm[72840]: VG ceph_vg0 finished
Nov 25 23:30:24 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 25 23:30:24 compute-0 sudo[72823]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:24 compute-0 sudo[72916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzxxqewniekcuwgbwibpjdsffpywmaq ; /usr/bin/python3'
Nov 25 23:30:24 compute-0 sudo[72916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:24 compute-0 python3[72918]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:30:24 compute-0 sudo[72916]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:25 compute-0 sudo[72989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkurlgynfcuapuskernvjojvgabnetyy ; /usr/bin/python3'
Nov 25 23:30:25 compute-0 sudo[72989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:25 compute-0 python3[72991]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113424.4942021-36371-56012506251373/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:30:25 compute-0 sudo[72989]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:25 compute-0 sudo[73039]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqlpurlqiacaeadztlzsekvoecsshamc ; /usr/bin/python3'
Nov 25 23:30:25 compute-0 sudo[73039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:26 compute-0 python3[73041]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:30:26 compute-0 systemd[1]: Reloading.
Nov 25 23:30:26 compute-0 systemd-rc-local-generator[73073]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:30:26 compute-0 systemd-sysv-generator[73076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:30:26 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 25 23:30:26 compute-0 bash[73084]: /dev/loop3: [64513]:4194933 (/var/lib/ceph-osd-0.img)
Nov 25 23:30:26 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 25 23:30:26 compute-0 lvm[73085]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 23:30:26 compute-0 lvm[73085]: VG ceph_vg0 finished
Nov 25 23:30:26 compute-0 sudo[73039]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:26 compute-0 sudo[73109]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuhloctvvtozibjgnxoiltyqlqlmvaip ; /usr/bin/python3'
Nov 25 23:30:26 compute-0 sudo[73109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:26 compute-0 python3[73111]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 23:30:27 compute-0 sshd-session[73042]: Invalid user sistemas from 171.244.140.22 port 34770
Nov 25 23:30:27 compute-0 sshd-session[73042]: Received disconnect from 171.244.140.22 port 34770:11: Bye Bye [preauth]
Nov 25 23:30:27 compute-0 sshd-session[73042]: Disconnected from invalid user sistemas 171.244.140.22 port 34770 [preauth]
Nov 25 23:30:28 compute-0 sudo[73109]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:28 compute-0 sudo[73136]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymjqmgdyshwvkkpfvqapolcoeehxbbxu ; /usr/bin/python3'
Nov 25 23:30:28 compute-0 sudo[73136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:28 compute-0 python3[73138]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 23:30:28 compute-0 sudo[73136]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:28 compute-0 sudo[73162]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hesnbvgulwjzwmyhwtgtdnlaezretlcs ; /usr/bin/python3'
Nov 25 23:30:28 compute-0 sudo[73162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:28 compute-0 python3[73164]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:30:28 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 25 23:30:28 compute-0 sudo[73162]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:29 compute-0 sudo[73194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amvdydcvcfrukryyljhvzijvwbmjxyol ; /usr/bin/python3'
Nov 25 23:30:29 compute-0 sudo[73194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:29 compute-0 python3[73196]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:30:29 compute-0 lvm[73199]: PV /dev/loop4 not used.
Nov 25 23:30:29 compute-0 lvm[73201]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 23:30:29 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 25 23:30:29 compute-0 lvm[73205]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 25 23:30:29 compute-0 lvm[73212]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 23:30:29 compute-0 lvm[73212]: VG ceph_vg1 finished
Nov 25 23:30:29 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 25 23:30:29 compute-0 sudo[73194]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:30 compute-0 sudo[73288]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umbirasyggvpndlyomdsjlhkgrdkxrci ; /usr/bin/python3'
Nov 25 23:30:30 compute-0 sudo[73288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:30 compute-0 python3[73290]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:30:30 compute-0 sudo[73288]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:30 compute-0 sudo[73361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqmmariawuuciobjrvdbzsihufbvcrpe ; /usr/bin/python3'
Nov 25 23:30:30 compute-0 sudo[73361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:30 compute-0 python3[73363]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113429.7840981-36398-8534027427791/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:30:30 compute-0 sudo[73361]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:30 compute-0 sudo[73411]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jopzxrkobhalpjoqvldllafmmxotwqbj ; /usr/bin/python3'
Nov 25 23:30:30 compute-0 sudo[73411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:31 compute-0 python3[73413]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:30:31 compute-0 systemd[1]: Reloading.
Nov 25 23:30:31 compute-0 systemd-rc-local-generator[73443]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:30:31 compute-0 systemd-sysv-generator[73448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:30:31 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 25 23:30:31 compute-0 bash[73454]: /dev/loop4: [64513]:4327939 (/var/lib/ceph-osd-1.img)
Nov 25 23:30:31 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 25 23:30:31 compute-0 lvm[73455]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 23:30:31 compute-0 lvm[73455]: VG ceph_vg1 finished
Nov 25 23:30:31 compute-0 sudo[73411]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:31 compute-0 sudo[73479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyzmqdxnyrnnnzjqqgsbgcjljjskwkbd ; /usr/bin/python3'
Nov 25 23:30:31 compute-0 sudo[73479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:31 compute-0 python3[73481]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 23:30:33 compute-0 sudo[73479]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:33 compute-0 sudo[73506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oevnmgsmcychskoiprpwvuziymolhsfn ; /usr/bin/python3'
Nov 25 23:30:33 compute-0 sudo[73506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:33 compute-0 python3[73508]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 23:30:33 compute-0 sudo[73506]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:33 compute-0 sudo[73532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjjvckugtoxgzgaqpvcyvkyrohoihuqn ; /usr/bin/python3'
Nov 25 23:30:33 compute-0 sudo[73532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:33 compute-0 python3[73534]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:30:33 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 25 23:30:33 compute-0 sudo[73532]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:34 compute-0 sudo[73564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzhcmnqzbnqseismsieyggkxdfppkbja ; /usr/bin/python3'
Nov 25 23:30:34 compute-0 sudo[73564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:34 compute-0 python3[73566]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:30:34 compute-0 lvm[73569]: PV /dev/loop5 not used.
Nov 25 23:30:34 compute-0 lvm[73571]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 23:30:34 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 25 23:30:34 compute-0 lvm[73579]:   1 logical volume(s) in volume group "ceph_vg2" now active
Nov 25 23:30:34 compute-0 lvm[73582]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 23:30:34 compute-0 lvm[73582]: VG ceph_vg2 finished
Nov 25 23:30:34 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 25 23:30:34 compute-0 sudo[73564]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:34 compute-0 sudo[73658]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyllzovvatokhooyedrmolfvkxalxfsj ; /usr/bin/python3'
Nov 25 23:30:34 compute-0 sudo[73658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:35 compute-0 python3[73660]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:30:35 compute-0 sudo[73658]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:35 compute-0 sudo[73731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iznmpnbmheldywetnzedfwwdnywcvllm ; /usr/bin/python3'
Nov 25 23:30:35 compute-0 sudo[73731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:35 compute-0 python3[73733]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113434.7399487-36425-260303206291209/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:30:35 compute-0 sudo[73731]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:35 compute-0 sudo[73781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsyhzlvbtqzqnshctwithbblzjdwwpcg ; /usr/bin/python3'
Nov 25 23:30:35 compute-0 sudo[73781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:36 compute-0 python3[73783]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:30:36 compute-0 systemd[1]: Reloading.
Nov 25 23:30:36 compute-0 systemd-sysv-generator[73816]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:30:36 compute-0 systemd-rc-local-generator[73813]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:30:36 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 25 23:30:36 compute-0 bash[73823]: /dev/loop5: [64513]:4327952 (/var/lib/ceph-osd-2.img)
Nov 25 23:30:36 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 25 23:30:36 compute-0 lvm[73824]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 23:30:36 compute-0 lvm[73824]: VG ceph_vg2 finished
Nov 25 23:30:36 compute-0 sudo[73781]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:38 compute-0 python3[73848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:30:40 compute-0 sudo[73939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssdyieqvavxrofabsccizbiquodhdvhz ; /usr/bin/python3'
Nov 25 23:30:40 compute-0 sudo[73939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:40 compute-0 python3[73941]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 23:30:42 compute-0 groupadd[73947]: group added to /etc/group: name=cephadm, GID=992
Nov 25 23:30:42 compute-0 groupadd[73947]: group added to /etc/gshadow: name=cephadm
Nov 25 23:30:42 compute-0 groupadd[73947]: new group: name=cephadm, GID=992
Nov 25 23:30:42 compute-0 useradd[73954]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 25 23:30:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 23:30:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 23:30:42 compute-0 sudo[73939]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 23:30:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 23:30:42 compute-0 systemd[1]: run-r1c78604c0ff44ca1b8e351090e5dff8d.service: Deactivated successfully.
Nov 25 23:30:42 compute-0 sudo[74050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeddqlvyadsbqwadaijvvkdqqupqdldw ; /usr/bin/python3'
Nov 25 23:30:42 compute-0 sudo[74050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:42 compute-0 python3[74052]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 23:30:43 compute-0 sudo[74050]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:43 compute-0 sudo[74078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aspelijiesvhqcfyythfduoaoeqykavk ; /usr/bin/python3'
Nov 25 23:30:43 compute-0 sudo[74078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:43 compute-0 python3[74080]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:30:43 compute-0 sudo[74078]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:44 compute-0 sudo[74142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzhrsmvvexcapkfmklvyqqofqfqmhkfv ; /usr/bin/python3'
Nov 25 23:30:44 compute-0 sudo[74142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:44 compute-0 python3[74144]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:30:44 compute-0 sudo[74142]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:44 compute-0 sudo[74168]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbdirskurzqbiamtggigpagtvhvtyyos ; /usr/bin/python3'
Nov 25 23:30:44 compute-0 sudo[74168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:30:44 compute-0 python3[74170]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:30:44 compute-0 sudo[74168]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:45 compute-0 sudo[74246]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmmenmsxdgucpchxfpriwkcvtgcfqaxj ; /usr/bin/python3'
Nov 25 23:30:45 compute-0 sudo[74246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:45 compute-0 python3[74248]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:30:45 compute-0 sudo[74246]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:45 compute-0 sudo[74319]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lapegdloyrnudcfoypcutwggitezhfah ; /usr/bin/python3'
Nov 25 23:30:45 compute-0 sudo[74319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:45 compute-0 python3[74321]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113445.2660484-36572-163902209348743/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:30:46 compute-0 sudo[74319]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:46 compute-0 sudo[74421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zusjlimbraztcbcixismpksydoaxxvkt ; /usr/bin/python3'
Nov 25 23:30:46 compute-0 sudo[74421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:46 compute-0 python3[74423]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:30:46 compute-0 sudo[74421]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:47 compute-0 sudo[74494]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbnbrjxrlfuxwwgwudroxurovchsyivt ; /usr/bin/python3'
Nov 25 23:30:47 compute-0 sudo[74494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:47 compute-0 python3[74496]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113446.525373-36590-62975644289509/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:30:47 compute-0 sudo[74494]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:47 compute-0 sudo[74544]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spftvzsmzdrnmstpdzephfxqgcjvuxht ; /usr/bin/python3'
Nov 25 23:30:47 compute-0 sudo[74544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:47 compute-0 python3[74546]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 23:30:47 compute-0 sudo[74544]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:48 compute-0 sudo[74572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbshafhvwyswblgklbhiyizttbporqf ; /usr/bin/python3'
Nov 25 23:30:48 compute-0 sudo[74572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:48 compute-0 python3[74574]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 23:30:48 compute-0 sudo[74572]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:48 compute-0 sudo[74600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfulikpvwtydwjgydrftillztfujdmhx ; /usr/bin/python3'
Nov 25 23:30:48 compute-0 sudo[74600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:48 compute-0 python3[74602]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 23:30:48 compute-0 sudo[74600]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:48 compute-0 sudo[74630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldirxeinzdougqgqulzhqfreijjatzbe ; /usr/bin/python3'
Nov 25 23:30:48 compute-0 sudo[74630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:30:49 compute-0 python3[74632]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 101922db-575f-58e2-980f-928050464f69 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:30:49 compute-0 sshd-session[74651]: Accepted publickey for ceph-admin from 192.168.122.100 port 40168 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:30:49 compute-0 systemd-logind[789]: New session 19 of user ceph-admin.
Nov 25 23:30:49 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 23:30:49 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 23:30:49 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 23:30:49 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 25 23:30:49 compute-0 systemd[74655]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:30:49 compute-0 systemd[74655]: Queued start job for default target Main User Target.
Nov 25 23:30:49 compute-0 systemd[74655]: Created slice User Application Slice.
Nov 25 23:30:49 compute-0 systemd[74655]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 23:30:49 compute-0 systemd[74655]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 23:30:49 compute-0 systemd[74655]: Reached target Paths.
Nov 25 23:30:49 compute-0 systemd[74655]: Reached target Timers.
Nov 25 23:30:49 compute-0 systemd[74655]: Starting D-Bus User Message Bus Socket...
Nov 25 23:30:49 compute-0 systemd[74655]: Starting Create User's Volatile Files and Directories...
Nov 25 23:30:49 compute-0 systemd[74655]: Listening on D-Bus User Message Bus Socket.
Nov 25 23:30:49 compute-0 systemd[74655]: Reached target Sockets.
Nov 25 23:30:49 compute-0 systemd[74655]: Finished Create User's Volatile Files and Directories.
Nov 25 23:30:49 compute-0 systemd[74655]: Reached target Basic System.
Nov 25 23:30:49 compute-0 systemd[74655]: Reached target Main User Target.
Nov 25 23:30:49 compute-0 systemd[74655]: Startup finished in 153ms.
Nov 25 23:30:49 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 25 23:30:49 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Nov 25 23:30:49 compute-0 sshd-session[74651]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:30:49 compute-0 sudo[74671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 25 23:30:49 compute-0 sudo[74671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:30:49 compute-0 sudo[74671]: pam_unix(sudo:session): session closed for user root
Nov 25 23:30:49 compute-0 sshd-session[74670]: Received disconnect from 192.168.122.100 port 40168:11: disconnected by user
Nov 25 23:30:49 compute-0 sshd-session[74670]: Disconnected from user ceph-admin 192.168.122.100 port 40168
Nov 25 23:30:49 compute-0 sshd-session[74651]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 25 23:30:49 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 25 23:30:49 compute-0 systemd-logind[789]: Session 19 logged out. Waiting for processes to exit.
Nov 25 23:30:49 compute-0 systemd-logind[789]: Removed session 19.
Nov 25 23:30:49 compute-0 sshd-session[74603]: Invalid user toor from 68.183.82.237 port 49660
Nov 25 23:30:50 compute-0 sshd-session[74603]: Received disconnect from 68.183.82.237 port 49660:11: Bye Bye [preauth]
Nov 25 23:30:50 compute-0 sshd-session[74603]: Disconnected from invalid user toor 68.183.82.237 port 49660 [preauth]
Nov 25 23:30:50 compute-0 sshd-session[74633]: Received disconnect from 124.18.141.70 port 56206:11: Bye Bye [preauth]
Nov 25 23:30:50 compute-0 sshd-session[74633]: Disconnected from authenticating user root 124.18.141.70 port 56206 [preauth]
Nov 25 23:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2864020147-lower\x2dmapped.mount: Deactivated successfully.
Nov 25 23:30:59 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 25 23:30:59 compute-0 systemd[74655]: Activating special unit Exit the Session...
Nov 25 23:30:59 compute-0 systemd[74655]: Stopped target Main User Target.
Nov 25 23:30:59 compute-0 systemd[74655]: Stopped target Basic System.
Nov 25 23:30:59 compute-0 systemd[74655]: Stopped target Paths.
Nov 25 23:30:59 compute-0 systemd[74655]: Stopped target Sockets.
Nov 25 23:30:59 compute-0 systemd[74655]: Stopped target Timers.
Nov 25 23:30:59 compute-0 systemd[74655]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 25 23:30:59 compute-0 systemd[74655]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 23:30:59 compute-0 systemd[74655]: Closed D-Bus User Message Bus Socket.
Nov 25 23:30:59 compute-0 systemd[74655]: Stopped Create User's Volatile Files and Directories.
Nov 25 23:30:59 compute-0 systemd[74655]: Removed slice User Application Slice.
Nov 25 23:30:59 compute-0 systemd[74655]: Reached target Shutdown.
Nov 25 23:30:59 compute-0 systemd[74655]: Finished Exit the Session.
Nov 25 23:30:59 compute-0 systemd[74655]: Reached target Exit the Session.
Nov 25 23:30:59 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 25 23:30:59 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 25 23:30:59 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 25 23:30:59 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 25 23:30:59 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 25 23:30:59 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 25 23:30:59 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 25 23:31:03 compute-0 podman[74708]: 2025-11-25 23:31:03.085755479 +0000 UTC m=+13.246080392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:31:03 compute-0 podman[74769]: 2025-11-25 23:31:03.180857648 +0000 UTC m=+0.060922713 container create 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:03 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 25 23:31:03 compute-0 systemd[1]: Started libpod-conmon-35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653.scope.
Nov 25 23:31:03 compute-0 podman[74769]: 2025-11-25 23:31:03.153954867 +0000 UTC m=+0.034019942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:03 compute-0 podman[74769]: 2025-11-25 23:31:03.296104916 +0000 UTC m=+0.176170021 container init 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:31:03 compute-0 podman[74769]: 2025-11-25 23:31:03.304715117 +0000 UTC m=+0.184780192 container start 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:31:03 compute-0 podman[74769]: 2025-11-25 23:31:03.308770995 +0000 UTC m=+0.188836060 container attach 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 23:31:03 compute-0 unruffled_ganguly[74786]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 25 23:31:03 compute-0 systemd[1]: libpod-35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653.scope: Deactivated successfully.
Nov 25 23:31:03 compute-0 podman[74769]: 2025-11-25 23:31:03.585017339 +0000 UTC m=+0.465082444 container died 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a19102f429c23896fb48842039f5ef1057cc31e3a6d81d406dab182aac5b96a-merged.mount: Deactivated successfully.
Nov 25 23:31:03 compute-0 podman[74769]: 2025-11-25 23:31:03.637652159 +0000 UTC m=+0.517717194 container remove 35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653 (image=quay.io/ceph/ceph:v18, name=unruffled_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 23:31:03 compute-0 systemd[1]: libpod-conmon-35556861b67d25f3b0124461d8a47358bab78b15688f2f69b602ade60fad0653.scope: Deactivated successfully.
Nov 25 23:31:03 compute-0 podman[74804]: 2025-11-25 23:31:03.720665184 +0000 UTC m=+0.059295201 container create eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 23:31:03 compute-0 systemd[1]: Started libpod-conmon-eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0.scope.
Nov 25 23:31:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:03 compute-0 podman[74804]: 2025-11-25 23:31:03.693424683 +0000 UTC m=+0.032054740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:03 compute-0 podman[74804]: 2025-11-25 23:31:03.799229069 +0000 UTC m=+0.137859076 container init eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:03 compute-0 podman[74804]: 2025-11-25 23:31:03.806559365 +0000 UTC m=+0.145189342 container start eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 25 23:31:03 compute-0 podman[74804]: 2025-11-25 23:31:03.81010755 +0000 UTC m=+0.148737527 container attach eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:03 compute-0 crazy_bell[74821]: 167 167
Nov 25 23:31:03 compute-0 systemd[1]: libpod-eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0.scope: Deactivated successfully.
Nov 25 23:31:03 compute-0 podman[74804]: 2025-11-25 23:31:03.811526679 +0000 UTC m=+0.150156656 container died eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 23:31:03 compute-0 podman[74804]: 2025-11-25 23:31:03.848857049 +0000 UTC m=+0.187487026 container remove eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0 (image=quay.io/ceph/ceph:v18, name=crazy_bell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 23:31:03 compute-0 systemd[1]: libpod-conmon-eca3df3177e217d10e09850b950e6ed8bb95f62dc50daa367d4b49df4b45ddb0.scope: Deactivated successfully.
Nov 25 23:31:03 compute-0 podman[74836]: 2025-11-25 23:31:03.911150588 +0000 UTC m=+0.043012243 container create a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 23:31:03 compute-0 systemd[1]: Started libpod-conmon-a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016.scope.
Nov 25 23:31:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:03 compute-0 podman[74836]: 2025-11-25 23:31:03.97205715 +0000 UTC m=+0.103918815 container init a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 23:31:03 compute-0 podman[74836]: 2025-11-25 23:31:03.98096892 +0000 UTC m=+0.112830575 container start a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:31:03 compute-0 podman[74836]: 2025-11-25 23:31:03.984662668 +0000 UTC m=+0.116524393 container attach a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:31:03 compute-0 podman[74836]: 2025-11-25 23:31:03.89292925 +0000 UTC m=+0.024790885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:04 compute-0 suspicious_carver[74853]: AQA4PCZpLy7mABAAbQPwB+Zx7UwVIZajRo9+2Q==
Nov 25 23:31:04 compute-0 systemd[1]: libpod-a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016.scope: Deactivated successfully.
Nov 25 23:31:04 compute-0 podman[74836]: 2025-11-25 23:31:04.020253302 +0000 UTC m=+0.152115017 container died a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:04 compute-0 podman[74836]: 2025-11-25 23:31:04.062621887 +0000 UTC m=+0.194483512 container remove a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016 (image=quay.io/ceph/ceph:v18, name=suspicious_carver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 23:31:04 compute-0 systemd[1]: libpod-conmon-a64c86b161f7dc1aa7d379dcda5b597b069978f4d7f94d1a6ffbd749986c5016.scope: Deactivated successfully.
Nov 25 23:31:04 compute-0 podman[74872]: 2025-11-25 23:31:04.159924484 +0000 UTC m=+0.064835008 container create ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:31:04 compute-0 systemd[1]: Started libpod-conmon-ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08.scope.
Nov 25 23:31:04 compute-0 podman[74872]: 2025-11-25 23:31:04.132472309 +0000 UTC m=+0.037382893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:04 compute-0 podman[74872]: 2025-11-25 23:31:04.250713787 +0000 UTC m=+0.155624311 container init ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 23:31:04 compute-0 podman[74872]: 2025-11-25 23:31:04.260302235 +0000 UTC m=+0.165212749 container start ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 23:31:04 compute-0 podman[74872]: 2025-11-25 23:31:04.264015374 +0000 UTC m=+0.168925888 container attach ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:31:04 compute-0 nice_spence[74888]: AQA4PCZpGKbZERAAlOt17UStSd5ILhCCM/ssoA==
Nov 25 23:31:04 compute-0 systemd[1]: libpod-ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08.scope: Deactivated successfully.
Nov 25 23:31:04 compute-0 podman[74872]: 2025-11-25 23:31:04.306534623 +0000 UTC m=+0.211445167 container died ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:31:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f4d3dbb742eb3e0577242299510165100aaca0f82e808700cb8a45a73c10393-merged.mount: Deactivated successfully.
Nov 25 23:31:04 compute-0 podman[74872]: 2025-11-25 23:31:04.345377215 +0000 UTC m=+0.250287739 container remove ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08 (image=quay.io/ceph/ceph:v18, name=nice_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 23:31:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:31:04 compute-0 systemd[1]: libpod-conmon-ddec09a04c9ea09c59edd5332c6d930d237b0676bb0cb14aca8e119355202c08.scope: Deactivated successfully.
Nov 25 23:31:04 compute-0 podman[74908]: 2025-11-25 23:31:04.433357462 +0000 UTC m=+0.059766573 container create 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 23:31:04 compute-0 podman[74908]: 2025-11-25 23:31:04.402009802 +0000 UTC m=+0.028418963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:04 compute-0 systemd[1]: Started libpod-conmon-6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2.scope.
Nov 25 23:31:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:04 compute-0 podman[74908]: 2025-11-25 23:31:04.588443557 +0000 UTC m=+0.214852668 container init 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:04 compute-0 podman[74908]: 2025-11-25 23:31:04.594373177 +0000 UTC m=+0.220782288 container start 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 23:31:04 compute-0 podman[74908]: 2025-11-25 23:31:04.598190489 +0000 UTC m=+0.224599610 container attach 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:04 compute-0 nice_hermann[74924]: AQA4PCZpzCmqJRAA3Tpjw8N/cUTVm1M6XMaoAw==
Nov 25 23:31:04 compute-0 systemd[1]: libpod-6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2.scope: Deactivated successfully.
Nov 25 23:31:04 compute-0 podman[74908]: 2025-11-25 23:31:04.637541723 +0000 UTC m=+0.263950834 container died 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 23:31:04 compute-0 podman[74908]: 2025-11-25 23:31:04.682713924 +0000 UTC m=+0.309123035 container remove 6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2 (image=quay.io/ceph/ceph:v18, name=nice_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:31:04 compute-0 systemd[1]: libpod-conmon-6c423fbf5b3aed4883fbd9cdd56b54e32fe2cb3a2c3108363d9b4c70e8c1d1f2.scope: Deactivated successfully.
Nov 25 23:31:04 compute-0 podman[74945]: 2025-11-25 23:31:04.764349562 +0000 UTC m=+0.054639705 container create 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:31:04 compute-0 systemd[1]: Started libpod-conmon-1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25.scope.
Nov 25 23:31:04 compute-0 podman[74945]: 2025-11-25 23:31:04.736838944 +0000 UTC m=+0.027129157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7466d65fc3a495166840401897c0c545e94e433ba236b835695e8a77ab7bb3fb/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:04 compute-0 podman[74945]: 2025-11-25 23:31:04.856485711 +0000 UTC m=+0.146775854 container init 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:31:04 compute-0 podman[74945]: 2025-11-25 23:31:04.865686097 +0000 UTC m=+0.155976270 container start 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 23:31:04 compute-0 podman[74945]: 2025-11-25 23:31:04.870005643 +0000 UTC m=+0.160295826 container attach 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 23:31:04 compute-0 jolly_mendel[74962]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 25 23:31:04 compute-0 jolly_mendel[74962]: setting min_mon_release = pacific
Nov 25 23:31:04 compute-0 jolly_mendel[74962]: /usr/bin/monmaptool: set fsid to 101922db-575f-58e2-980f-928050464f69
Nov 25 23:31:04 compute-0 jolly_mendel[74962]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 25 23:31:04 compute-0 systemd[1]: libpod-1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25.scope: Deactivated successfully.
Nov 25 23:31:04 compute-0 podman[74945]: 2025-11-25 23:31:04.911006892 +0000 UTC m=+0.201297045 container died 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:04 compute-0 podman[74945]: 2025-11-25 23:31:04.949707948 +0000 UTC m=+0.239998091 container remove 1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25 (image=quay.io/ceph/ceph:v18, name=jolly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 23:31:04 compute-0 systemd[1]: libpod-conmon-1c14d1dc5f44b8886723ef98a2c740b4d72b0c6be561c5463ff222df3886cd25.scope: Deactivated successfully.
Nov 25 23:31:05 compute-0 podman[74982]: 2025-11-25 23:31:05.023950368 +0000 UTC m=+0.054231624 container create 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:05 compute-0 systemd[1]: Started libpod-conmon-89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a.scope.
Nov 25 23:31:05 compute-0 podman[74982]: 2025-11-25 23:31:04.995018613 +0000 UTC m=+0.025299929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939591c70eb84b12bdeb076b72adb3084dbade29d38f1d12ed6f7a4b19df4be/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939591c70eb84b12bdeb076b72adb3084dbade29d38f1d12ed6f7a4b19df4be/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939591c70eb84b12bdeb076b72adb3084dbade29d38f1d12ed6f7a4b19df4be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d939591c70eb84b12bdeb076b72adb3084dbade29d38f1d12ed6f7a4b19df4be/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:05 compute-0 podman[74982]: 2025-11-25 23:31:05.112670696 +0000 UTC m=+0.142951922 container init 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:31:05 compute-0 podman[74982]: 2025-11-25 23:31:05.124986796 +0000 UTC m=+0.155268012 container start 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:05 compute-0 podman[74982]: 2025-11-25 23:31:05.127866453 +0000 UTC m=+0.158147669 container attach 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:31:05 compute-0 systemd[1]: libpod-89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a.scope: Deactivated successfully.
Nov 25 23:31:05 compute-0 podman[75024]: 2025-11-25 23:31:05.239973408 +0000 UTC m=+0.017347227 container died 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 23:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d939591c70eb84b12bdeb076b72adb3084dbade29d38f1d12ed6f7a4b19df4be-merged.mount: Deactivated successfully.
Nov 25 23:31:05 compute-0 podman[75024]: 2025-11-25 23:31:05.271627915 +0000 UTC m=+0.049001734 container remove 89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a (image=quay.io/ceph/ceph:v18, name=gifted_mcnulty, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 23:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:31:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:31:05 compute-0 systemd[1]: libpod-conmon-89d63d534dd20b1ec38e335d2842d399efecd48d86f50a98db828e07ae67073a.scope: Deactivated successfully.
Nov 25 23:31:05 compute-0 systemd[1]: Reloading.
Nov 25 23:31:05 compute-0 systemd-sysv-generator[75071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:31:05 compute-0 systemd-rc-local-generator[75066]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:31:05 compute-0 systemd[1]: Reloading.
Nov 25 23:31:05 compute-0 systemd-sysv-generator[75105]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:31:05 compute-0 systemd-rc-local-generator[75102]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:31:05 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 25 23:31:05 compute-0 systemd[1]: Reloading.
Nov 25 23:31:05 compute-0 systemd-rc-local-generator[75136]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:31:05 compute-0 systemd-sysv-generator[75142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:31:06 compute-0 systemd[1]: Reached target Ceph cluster 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:31:06 compute-0 systemd[1]: Reloading.
Nov 25 23:31:06 compute-0 systemd-rc-local-generator[75179]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:31:06 compute-0 systemd-sysv-generator[75183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:31:06 compute-0 systemd[1]: Reloading.
Nov 25 23:31:06 compute-0 systemd-rc-local-generator[75217]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:31:06 compute-0 systemd-sysv-generator[75223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:31:06 compute-0 systemd[1]: Created slice Slice /system/ceph-101922db-575f-58e2-980f-928050464f69.
Nov 25 23:31:06 compute-0 systemd[1]: Reached target System Time Set.
Nov 25 23:31:06 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 25 23:31:06 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:31:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:31:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:31:06 compute-0 podman[75278]: 2025-11-25 23:31:06.968970111 +0000 UTC m=+0.073589384 container create ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:07 compute-0 podman[75278]: 2025-11-25 23:31:06.939028728 +0000 UTC m=+0.043648041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577faa4070015ce3b56f460840662cc6473957b4727542d7db74cfa9c3a73012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577faa4070015ce3b56f460840662cc6473957b4727542d7db74cfa9c3a73012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577faa4070015ce3b56f460840662cc6473957b4727542d7db74cfa9c3a73012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577faa4070015ce3b56f460840662cc6473957b4727542d7db74cfa9c3a73012/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 podman[75278]: 2025-11-25 23:31:07.06447641 +0000 UTC m=+0.169095713 container init ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 23:31:07 compute-0 podman[75278]: 2025-11-25 23:31:07.08204715 +0000 UTC m=+0.186666423 container start ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:31:07 compute-0 bash[75278]: ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7
Nov 25 23:31:07 compute-0 systemd[1]: Started Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:31:07 compute-0 ceph-mon[75298]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 23:31:07 compute-0 ceph-mon[75298]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 25 23:31:07 compute-0 ceph-mon[75298]: pidfile_write: ignore empty --pid-file
Nov 25 23:31:07 compute-0 ceph-mon[75298]: load: jerasure load: lrc 
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: RocksDB version: 7.9.2
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Git sha 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: DB SUMMARY
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: DB Session ID:  UB9NOW7HEWWESFP4TBUB
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: CURRENT file:  CURRENT
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                         Options.error_if_exists: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                       Options.create_if_missing: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                                     Options.env: 0x55c0809eac40
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                                Options.info_log: 0x55c08216ae80
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                              Options.statistics: (nil)
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                               Options.use_fsync: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                              Options.db_log_dir: 
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                                 Options.wal_dir: 
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                    Options.write_buffer_manager: 0x55c08217ab40
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                  Options.unordered_write: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                               Options.row_cache: None
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                              Options.wal_filter: None
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.two_write_queues: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.wal_compression: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.atomic_flush: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.max_background_jobs: 2
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.max_background_compactions: -1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.max_subcompactions: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                          Options.max_open_files: -1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Compression algorithms supported:
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         kZSTD supported: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         kXpressCompression supported: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         kBZip2Compression supported: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         kLZ4Compression supported: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         kZlibCompression supported: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         kSnappyCompression supported: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:           Options.merge_operator: 
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:        Options.compaction_filter: None
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c08216aa80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c0821631f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:          Options.compression: NoCompression
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.num_levels: 7
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cf57a6b1-796f-4cfa-b350-53eb10a4554d
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113467148059, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113467150312, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "UB9NOW7HEWWESFP4TBUB", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113467150434, "job": 1, "event": "recovery_finished"}
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c08218ce00
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: DB pointer 0x55c082216000
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:31:07 compute-0 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c0821631f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 23:31:07 compute-0 ceph-mon[75298]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@-1(???) e0 preinit fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 25 23:31:07 compute-0 ceph-mon[75298]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 23:31:07 compute-0 ceph-mon[75298]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 23:31:07 compute-0 ceph-mon[75298]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 23:31:07 compute-0 ceph-mon[75298]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 23:31:07 compute-0 ceph-mon[75298]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-25T23:31:05.163110Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).mds e1 new map
Nov 25 23:31:07 compute-0 podman[75299]: 2025-11-25 23:31:07.192225173 +0000 UTC m=+0.060807270 container create 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 25 23:31:07 compute-0 ceph-mon[75298]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mkfs 101922db-575f-58e2-980f-928050464f69
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 25 23:31:07 compute-0 ceph-mon[75298]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 23:31:07 compute-0 ceph-mon[75298]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 23:31:07 compute-0 systemd[1]: Started libpod-conmon-623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636.scope.
Nov 25 23:31:07 compute-0 podman[75299]: 2025-11-25 23:31:07.170556793 +0000 UTC m=+0.039138930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b673d072b5d5713d4e9817711233687a70e52c95ae672a3b0cb9b3ee82541571/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b673d072b5d5713d4e9817711233687a70e52c95ae672a3b0cb9b3ee82541571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b673d072b5d5713d4e9817711233687a70e52c95ae672a3b0cb9b3ee82541571/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 podman[75299]: 2025-11-25 23:31:07.278746972 +0000 UTC m=+0.147329079 container init 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 23:31:07 compute-0 podman[75299]: 2025-11-25 23:31:07.285969375 +0000 UTC m=+0.154551502 container start 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:31:07 compute-0 podman[75299]: 2025-11-25 23:31:07.289759597 +0000 UTC m=+0.158341684 container attach 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:31:07 compute-0 ceph-mon[75298]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 23:31:07 compute-0 ceph-mon[75298]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/242569495' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 23:31:07 compute-0 kind_pascal[75353]:   cluster:
Nov 25 23:31:07 compute-0 kind_pascal[75353]:     id:     101922db-575f-58e2-980f-928050464f69
Nov 25 23:31:07 compute-0 kind_pascal[75353]:     health: HEALTH_OK
Nov 25 23:31:07 compute-0 kind_pascal[75353]:  
Nov 25 23:31:07 compute-0 kind_pascal[75353]:   services:
Nov 25 23:31:07 compute-0 kind_pascal[75353]:     mon: 1 daemons, quorum compute-0 (age 0.472283s)
Nov 25 23:31:07 compute-0 kind_pascal[75353]:     mgr: no daemons active
Nov 25 23:31:07 compute-0 kind_pascal[75353]:     osd: 0 osds: 0 up, 0 in
Nov 25 23:31:07 compute-0 kind_pascal[75353]:  
Nov 25 23:31:07 compute-0 kind_pascal[75353]:   data:
Nov 25 23:31:07 compute-0 kind_pascal[75353]:     pools:   0 pools, 0 pgs
Nov 25 23:31:07 compute-0 kind_pascal[75353]:     objects: 0 objects, 0 B
Nov 25 23:31:07 compute-0 kind_pascal[75353]:     usage:   0 B used, 0 B / 0 B avail
Nov 25 23:31:07 compute-0 kind_pascal[75353]:     pgs:     
Nov 25 23:31:07 compute-0 kind_pascal[75353]:  
Nov 25 23:31:07 compute-0 systemd[1]: libpod-623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636.scope: Deactivated successfully.
Nov 25 23:31:07 compute-0 podman[75299]: 2025-11-25 23:31:07.673565942 +0000 UTC m=+0.542148069 container died 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:31:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b673d072b5d5713d4e9817711233687a70e52c95ae672a3b0cb9b3ee82541571-merged.mount: Deactivated successfully.
Nov 25 23:31:07 compute-0 podman[75299]: 2025-11-25 23:31:07.734278439 +0000 UTC m=+0.602860556 container remove 623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636 (image=quay.io/ceph/ceph:v18, name=kind_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 23:31:07 compute-0 systemd[1]: libpod-conmon-623c52b4e9efa1c5e241836b425520fd1bed7ca7ca9c2fc9d1ef092c761da636.scope: Deactivated successfully.
Nov 25 23:31:07 compute-0 podman[75393]: 2025-11-25 23:31:07.839665622 +0000 UTC m=+0.075555915 container create f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 23:31:07 compute-0 systemd[1]: Started libpod-conmon-f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd.scope.
Nov 25 23:31:07 compute-0 podman[75393]: 2025-11-25 23:31:07.806491134 +0000 UTC m=+0.042381467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c63e0d9966f76285c57f51ad147370a511265e675e2649be9ed64e8433bbd7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c63e0d9966f76285c57f51ad147370a511265e675e2649be9ed64e8433bbd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c63e0d9966f76285c57f51ad147370a511265e675e2649be9ed64e8433bbd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c63e0d9966f76285c57f51ad147370a511265e675e2649be9ed64e8433bbd7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:07 compute-0 podman[75393]: 2025-11-25 23:31:07.938534892 +0000 UTC m=+0.174425155 container init f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:31:07 compute-0 podman[75393]: 2025-11-25 23:31:07.948845858 +0000 UTC m=+0.184736151 container start f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:07 compute-0 podman[75393]: 2025-11-25 23:31:07.952528247 +0000 UTC m=+0.188418550 container attach f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:08 compute-0 ceph-mon[75298]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 23:31:08 compute-0 ceph-mon[75298]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 23:31:08 compute-0 ceph-mon[75298]: fsmap 
Nov 25 23:31:08 compute-0 ceph-mon[75298]: osdmap e1: 0 total, 0 up, 0 in
Nov 25 23:31:08 compute-0 ceph-mon[75298]: mgrmap e1: no daemons active
Nov 25 23:31:08 compute-0 ceph-mon[75298]: from='client.? 192.168.122.100:0/242569495' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 23:31:08 compute-0 ceph-mon[75298]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 23:31:08 compute-0 ceph-mon[75298]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2041492923' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 23:31:08 compute-0 ceph-mon[75298]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2041492923' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 23:31:08 compute-0 wizardly_dijkstra[75409]: 
Nov 25 23:31:08 compute-0 wizardly_dijkstra[75409]: [global]
Nov 25 23:31:08 compute-0 wizardly_dijkstra[75409]:         fsid = 101922db-575f-58e2-980f-928050464f69
Nov 25 23:31:08 compute-0 wizardly_dijkstra[75409]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 25 23:31:08 compute-0 wizardly_dijkstra[75409]:         osd_crush_chooseleaf_type = 0
Nov 25 23:31:08 compute-0 systemd[1]: libpod-f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd.scope: Deactivated successfully.
Nov 25 23:31:08 compute-0 podman[75393]: 2025-11-25 23:31:08.353392109 +0000 UTC m=+0.589282372 container died f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 25 23:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0c63e0d9966f76285c57f51ad147370a511265e675e2649be9ed64e8433bbd7-merged.mount: Deactivated successfully.
Nov 25 23:31:08 compute-0 podman[75393]: 2025-11-25 23:31:08.405325351 +0000 UTC m=+0.641215604 container remove f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd (image=quay.io/ceph/ceph:v18, name=wizardly_dijkstra, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:08 compute-0 systemd[1]: libpod-conmon-f17bfe56bb87c13dae79f76c32005e86bff8b0df2bbe82b02b0b1879e8078fdd.scope: Deactivated successfully.
Nov 25 23:31:08 compute-0 podman[75449]: 2025-11-25 23:31:08.475124791 +0000 UTC m=+0.046739883 container create 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 23:31:08 compute-0 systemd[1]: Started libpod-conmon-2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599.scope.
Nov 25 23:31:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e5d45b259153fccdaf6c0188aef920ca0bc9fe466674d7d5d0c6bb78257653/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e5d45b259153fccdaf6c0188aef920ca0bc9fe466674d7d5d0c6bb78257653/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e5d45b259153fccdaf6c0188aef920ca0bc9fe466674d7d5d0c6bb78257653/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e5d45b259153fccdaf6c0188aef920ca0bc9fe466674d7d5d0c6bb78257653/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:08 compute-0 podman[75449]: 2025-11-25 23:31:08.450243125 +0000 UTC m=+0.021858257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:08 compute-0 podman[75449]: 2025-11-25 23:31:08.568161805 +0000 UTC m=+0.139776947 container init 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 23:31:08 compute-0 podman[75449]: 2025-11-25 23:31:08.57879328 +0000 UTC m=+0.150408412 container start 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 23:31:08 compute-0 podman[75449]: 2025-11-25 23:31:08.583309781 +0000 UTC m=+0.154924913 container attach 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 23:31:08 compute-0 ceph-mon[75298]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:31:08 compute-0 ceph-mon[75298]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124780770' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:31:08 compute-0 systemd[1]: libpod-2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599.scope: Deactivated successfully.
Nov 25 23:31:09 compute-0 podman[75491]: 2025-11-25 23:31:09.02510238 +0000 UTC m=+0.024018075 container died 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1e5d45b259153fccdaf6c0188aef920ca0bc9fe466674d7d5d0c6bb78257653-merged.mount: Deactivated successfully.
Nov 25 23:31:09 compute-0 podman[75491]: 2025-11-25 23:31:09.07660338 +0000 UTC m=+0.075519035 container remove 2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599 (image=quay.io/ceph/ceph:v18, name=exciting_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:09 compute-0 systemd[1]: libpod-conmon-2eee6d3a1e222c2e17649ca9f09222ad31d1cbff86f9cdbc8474cf6169c7a599.scope: Deactivated successfully.
Nov 25 23:31:09 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:31:09 compute-0 ceph-mon[75298]: from='client.? 192.168.122.100:0/2041492923' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 23:31:09 compute-0 ceph-mon[75298]: from='client.? 192.168.122.100:0/2041492923' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 23:31:09 compute-0 ceph-mon[75298]: from='client.? 192.168.122.100:0/1124780770' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:31:09 compute-0 ceph-mon[75298]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 23:31:09 compute-0 ceph-mon[75298]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 23:31:09 compute-0 ceph-mon[75298]: mon.compute-0@0(leader) e1 shutdown
Nov 25 23:31:09 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0[75293]: 2025-11-25T23:31:09.364+0000 7fba1831d640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 25 23:31:09 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0[75293]: 2025-11-25T23:31:09.364+0000 7fba1831d640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 25 23:31:09 compute-0 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 23:31:09 compute-0 ceph-mon[75298]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 23:31:09 compute-0 podman[75535]: 2025-11-25 23:31:09.426600459 +0000 UTC m=+0.115667901 container died ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-577faa4070015ce3b56f460840662cc6473957b4727542d7db74cfa9c3a73012-merged.mount: Deactivated successfully.
Nov 25 23:31:09 compute-0 podman[75535]: 2025-11-25 23:31:09.469431066 +0000 UTC m=+0.158498398 container remove ac0a24a6c545ea158fe0e3831004243e4b4c11430a272257b058329d471122c7 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:09 compute-0 bash[75535]: ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0
Nov 25 23:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:31:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 23:31:09 compute-0 systemd[1]: ceph-101922db-575f-58e2-980f-928050464f69@mon.compute-0.service: Deactivated successfully.
Nov 25 23:31:09 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:31:09 compute-0 systemd[1]: ceph-101922db-575f-58e2-980f-928050464f69@mon.compute-0.service: Consumed 1.361s CPU time.
Nov 25 23:31:09 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:31:09 compute-0 podman[75634]: 2025-11-25 23:31:09.98337921 +0000 UTC m=+0.053367952 container create 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c2aa8903f56cf001ed4089ced6ff01949bdc00412c5be2ad4fd203969eb304/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c2aa8903f56cf001ed4089ced6ff01949bdc00412c5be2ad4fd203969eb304/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c2aa8903f56cf001ed4089ced6ff01949bdc00412c5be2ad4fd203969eb304/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c2aa8903f56cf001ed4089ced6ff01949bdc00412c5be2ad4fd203969eb304/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:10 compute-0 podman[75634]: 2025-11-25 23:31:09.96098675 +0000 UTC m=+0.030975502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:10 compute-0 podman[75634]: 2025-11-25 23:31:10.062579552 +0000 UTC m=+0.132568344 container init 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 25 23:31:10 compute-0 podman[75634]: 2025-11-25 23:31:10.078725205 +0000 UTC m=+0.148713957 container start 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:31:10 compute-0 bash[75634]: 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2
Nov 25 23:31:10 compute-0 systemd[1]: Started Ceph mon.compute-0 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:31:10 compute-0 ceph-mon[75654]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 23:31:10 compute-0 ceph-mon[75654]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 25 23:31:10 compute-0 ceph-mon[75654]: pidfile_write: ignore empty --pid-file
Nov 25 23:31:10 compute-0 ceph-mon[75654]: load: jerasure load: lrc 
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: RocksDB version: 7.9.2
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Git sha 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: DB SUMMARY
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: DB Session ID:  Q7VS70283MEZ1V621ZPR
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: CURRENT file:  CURRENT
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55676 ; 
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                         Options.error_if_exists: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                       Options.create_if_missing: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                                     Options.env: 0x55f0eb8aec40
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                                Options.info_log: 0x55f0edccb040
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                              Options.statistics: (nil)
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                               Options.use_fsync: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                              Options.db_log_dir: 
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                                 Options.wal_dir: 
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                    Options.write_buffer_manager: 0x55f0edcdab40
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                  Options.unordered_write: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                               Options.row_cache: None
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                              Options.wal_filter: None
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.two_write_queues: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.wal_compression: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.atomic_flush: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.max_background_jobs: 2
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.max_background_compactions: -1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.max_subcompactions: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                          Options.max_open_files: -1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Compression algorithms supported:
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         kZSTD supported: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         kXpressCompression supported: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         kBZip2Compression supported: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         kLZ4Compression supported: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         kZlibCompression supported: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         kSnappyCompression supported: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:           Options.merge_operator: 
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:        Options.compaction_filter: None
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f0edccac40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f0edcc31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:          Options.compression: NoCompression
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.num_levels: 7
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cf57a6b1-796f-4cfa-b350-53eb10a4554d
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113470128961, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113470132223, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53797, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51386, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113470, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113470132346, "job": 1, "event": "recovery_finished"}
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f0edcece00
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: DB pointer 0x55f0edd76000
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:31:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.91 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.91 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f0edcc31f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 23:31:10 compute-0 ceph-mon[75654]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@-1(???) e1 preinit fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@-1(???).mds e1 new map
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 25 23:31:10 compute-0 ceph-mon[75654]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 23:31:10 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 23:31:10 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 23:31:10 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : fsmap 
Nov 25 23:31:10 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 25 23:31:10 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 25 23:31:10 compute-0 podman[75655]: 2025-11-25 23:31:10.190925642 +0000 UTC m=+0.065460136 container create d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 25 23:31:10 compute-0 ceph-mon[75654]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 25 23:31:10 compute-0 ceph-mon[75654]: fsmap 
Nov 25 23:31:10 compute-0 ceph-mon[75654]: osdmap e1: 0 total, 0 up, 0 in
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mgrmap e1: no daemons active
Nov 25 23:31:10 compute-0 systemd[1]: Started libpod-conmon-d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373.scope.
Nov 25 23:31:10 compute-0 podman[75655]: 2025-11-25 23:31:10.16625693 +0000 UTC m=+0.040791474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6547807f31376b062c8c38efcb0448febf02c8e14ba702a49d8b2b8cc1dbd544/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6547807f31376b062c8c38efcb0448febf02c8e14ba702a49d8b2b8cc1dbd544/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6547807f31376b062c8c38efcb0448febf02c8e14ba702a49d8b2b8cc1dbd544/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:10 compute-0 podman[75655]: 2025-11-25 23:31:10.295675149 +0000 UTC m=+0.170209703 container init d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:10 compute-0 podman[75655]: 2025-11-25 23:31:10.306218711 +0000 UTC m=+0.180753205 container start d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:31:10 compute-0 podman[75655]: 2025-11-25 23:31:10.30954317 +0000 UTC m=+0.184077734 container attach d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:31:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 25 23:31:10 compute-0 systemd[1]: libpod-d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373.scope: Deactivated successfully.
Nov 25 23:31:10 compute-0 podman[75655]: 2025-11-25 23:31:10.724697275 +0000 UTC m=+0.599231769 container died d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6547807f31376b062c8c38efcb0448febf02c8e14ba702a49d8b2b8cc1dbd544-merged.mount: Deactivated successfully.
Nov 25 23:31:10 compute-0 podman[75655]: 2025-11-25 23:31:10.779263197 +0000 UTC m=+0.653797691 container remove d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373 (image=quay.io/ceph/ceph:v18, name=optimistic_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 23:31:10 compute-0 systemd[1]: libpod-conmon-d9aae4e67beff1eb85d27fb85b50419260e3d168f99a54cf7997ef097632f373.scope: Deactivated successfully.
Nov 25 23:31:10 compute-0 podman[75748]: 2025-11-25 23:31:10.875089685 +0000 UTC m=+0.063947534 container create 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:31:10 compute-0 systemd[1]: Started libpod-conmon-8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20.scope.
Nov 25 23:31:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:10 compute-0 podman[75748]: 2025-11-25 23:31:10.848903384 +0000 UTC m=+0.037761273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff988a36679ac1dda85b5ec70e2e5270f9539f7e2682a19bff82a26fc118d9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff988a36679ac1dda85b5ec70e2e5270f9539f7e2682a19bff82a26fc118d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff988a36679ac1dda85b5ec70e2e5270f9539f7e2682a19bff82a26fc118d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:10 compute-0 podman[75748]: 2025-11-25 23:31:10.965051996 +0000 UTC m=+0.153909835 container init 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 23:31:10 compute-0 podman[75748]: 2025-11-25 23:31:10.97749208 +0000 UTC m=+0.166349919 container start 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:31:10 compute-0 podman[75748]: 2025-11-25 23:31:10.981680592 +0000 UTC m=+0.170538441 container attach 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:31:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 25 23:31:11 compute-0 systemd[1]: libpod-8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20.scope: Deactivated successfully.
Nov 25 23:31:11 compute-0 conmon[75764]: conmon 8912c836b11621119281 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20.scope/container/memory.events
Nov 25 23:31:11 compute-0 podman[75790]: 2025-11-25 23:31:11.464097539 +0000 UTC m=+0.039558591 container died 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 23:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2ff988a36679ac1dda85b5ec70e2e5270f9539f7e2682a19bff82a26fc118d9-merged.mount: Deactivated successfully.
Nov 25 23:31:11 compute-0 podman[75790]: 2025-11-25 23:31:11.507368399 +0000 UTC m=+0.082829411 container remove 8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20 (image=quay.io/ceph/ceph:v18, name=awesome_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:11 compute-0 systemd[1]: libpod-conmon-8912c836b1162111928196f7a7b9483fad7398b280c06f9538322dcac2feef20.scope: Deactivated successfully.
Nov 25 23:31:11 compute-0 systemd[1]: Reloading.
Nov 25 23:31:11 compute-0 systemd-rc-local-generator[75831]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:31:11 compute-0 systemd-sysv-generator[75834]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:31:11 compute-0 systemd[1]: Reloading.
Nov 25 23:31:11 compute-0 systemd-rc-local-generator[75867]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:31:11 compute-0 systemd-sysv-generator[75872]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:31:12 compute-0 systemd[1]: Starting Ceph mgr.compute-0.gwqfsl for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:31:12 compute-0 podman[75934]: 2025-11-25 23:31:12.530565238 +0000 UTC m=+0.071277331 container create cb17cd0be6b6c5a9b97ff9dc5584e4a27121c5d35ae63d1e3300d59246c81be2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac7215356bd862bcf51092d9c1f43e2b2864a20a35f4a69cadddfad036631b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac7215356bd862bcf51092d9c1f43e2b2864a20a35f4a69cadddfad036631b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac7215356bd862bcf51092d9c1f43e2b2864a20a35f4a69cadddfad036631b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac7215356bd862bcf51092d9c1f43e2b2864a20a35f4a69cadddfad036631b0/merged/var/lib/ceph/mgr/ceph-compute-0.gwqfsl supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:12 compute-0 podman[75934]: 2025-11-25 23:31:12.501434528 +0000 UTC m=+0.042146661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:12 compute-0 podman[75934]: 2025-11-25 23:31:12.611661932 +0000 UTC m=+0.152374055 container init cb17cd0be6b6c5a9b97ff9dc5584e4a27121c5d35ae63d1e3300d59246c81be2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 23:31:12 compute-0 podman[75934]: 2025-11-25 23:31:12.623005516 +0000 UTC m=+0.163717599 container start cb17cd0be6b6c5a9b97ff9dc5584e4a27121c5d35ae63d1e3300d59246c81be2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 23:31:12 compute-0 bash[75934]: cb17cd0be6b6c5a9b97ff9dc5584e4a27121c5d35ae63d1e3300d59246c81be2
Nov 25 23:31:12 compute-0 systemd[1]: Started Ceph mgr.compute-0.gwqfsl for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:31:12 compute-0 ceph-mgr[75954]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 23:31:12 compute-0 ceph-mgr[75954]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 23:31:12 compute-0 ceph-mgr[75954]: pidfile_write: ignore empty --pid-file
Nov 25 23:31:12 compute-0 podman[75955]: 2025-11-25 23:31:12.726880359 +0000 UTC m=+0.058558310 container create f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:12 compute-0 systemd[1]: Started libpod-conmon-f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff.scope.
Nov 25 23:31:12 compute-0 podman[75955]: 2025-11-25 23:31:12.697167773 +0000 UTC m=+0.028845734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756f4a954c25588889f994d4d2e19c4baf1f0a0530e93c4ae7d1788e473981f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756f4a954c25588889f994d4d2e19c4baf1f0a0530e93c4ae7d1788e473981f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756f4a954c25588889f994d4d2e19c4baf1f0a0530e93c4ae7d1788e473981f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:12 compute-0 podman[75955]: 2025-11-25 23:31:12.821782002 +0000 UTC m=+0.153459923 container init f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 25 23:31:12 compute-0 podman[75955]: 2025-11-25 23:31:12.833290181 +0000 UTC m=+0.164968132 container start f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 23:31:12 compute-0 podman[75955]: 2025-11-25 23:31:12.837635487 +0000 UTC m=+0.169313478 container attach f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 23:31:12 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'alerts'
Nov 25 23:31:13 compute-0 ceph-mgr[75954]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 23:31:13 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'balancer'
Nov 25 23:31:13 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:13.157+0000 7f0aaebe6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 23:31:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 23:31:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216517223' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]: 
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]: {
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "health": {
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "status": "HEALTH_OK",
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "checks": {},
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "mutes": []
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     },
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "election_epoch": 5,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "quorum": [
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         0
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     ],
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "quorum_names": [
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "compute-0"
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     ],
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "quorum_age": 3,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "monmap": {
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "epoch": 1,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "min_mon_release_name": "reef",
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "num_mons": 1
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     },
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "osdmap": {
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "epoch": 1,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "num_osds": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "num_up_osds": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "osd_up_since": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "num_in_osds": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "osd_in_since": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "num_remapped_pgs": 0
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     },
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "pgmap": {
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "pgs_by_state": [],
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "num_pgs": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "num_pools": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "num_objects": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "data_bytes": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "bytes_used": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "bytes_avail": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "bytes_total": 0
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     },
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "fsmap": {
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "epoch": 1,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "by_rank": [],
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "up:standby": 0
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     },
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "mgrmap": {
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "available": false,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "num_standbys": 0,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "modules": [
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:             "iostat",
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:             "nfs",
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:             "restful"
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         ],
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "services": {}
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     },
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "servicemap": {
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "epoch": 1,
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:         "services": {}
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     },
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]:     "progress_events": {}
Nov 25 23:31:13 compute-0 vigilant_proskuriakova[75994]: }
Nov 25 23:31:13 compute-0 systemd[1]: libpod-f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff.scope: Deactivated successfully.
Nov 25 23:31:13 compute-0 podman[75955]: 2025-11-25 23:31:13.249468623 +0000 UTC m=+0.581146574 container died f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9756f4a954c25588889f994d4d2e19c4baf1f0a0530e93c4ae7d1788e473981f-merged.mount: Deactivated successfully.
Nov 25 23:31:13 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1216517223' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:13 compute-0 podman[75955]: 2025-11-25 23:31:13.303933003 +0000 UTC m=+0.635610914 container remove f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff (image=quay.io/ceph/ceph:v18, name=vigilant_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 23:31:13 compute-0 systemd[1]: libpod-conmon-f58632f9ab32f0fdb08636860a90761cbe2a20e0f86beee1109848a070b8bfff.scope: Deactivated successfully.
Nov 25 23:31:13 compute-0 ceph-mgr[75954]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 23:31:13 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'cephadm'
Nov 25 23:31:13 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:13.461+0000 7f0aaebe6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 23:31:15 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'crash'
Nov 25 23:31:15 compute-0 podman[76042]: 2025-11-25 23:31:15.380273154 +0000 UTC m=+0.048329286 container create e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 23:31:15 compute-0 systemd[1]: Started libpod-conmon-e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6.scope.
Nov 25 23:31:15 compute-0 podman[76042]: 2025-11-25 23:31:15.358893111 +0000 UTC m=+0.026949233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f82046b61787684eb825545a885c89e630acc89315cc87f8740c098c5f357f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f82046b61787684eb825545a885c89e630acc89315cc87f8740c098c5f357f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f82046b61787684eb825545a885c89e630acc89315cc87f8740c098c5f357f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:15 compute-0 podman[76042]: 2025-11-25 23:31:15.480914431 +0000 UTC m=+0.148970603 container init e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:15 compute-0 podman[76042]: 2025-11-25 23:31:15.494425753 +0000 UTC m=+0.162481865 container start e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:31:15 compute-0 podman[76042]: 2025-11-25 23:31:15.498422839 +0000 UTC m=+0.166478981 container attach e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:31:15 compute-0 ceph-mgr[75954]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 23:31:15 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'dashboard'
Nov 25 23:31:15 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:15.545+0000 7f0aaebe6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 23:31:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 23:31:15 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2985170876' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:15 compute-0 distracted_jones[76058]: 
Nov 25 23:31:15 compute-0 distracted_jones[76058]: {
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "health": {
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "status": "HEALTH_OK",
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "checks": {},
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "mutes": []
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     },
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "election_epoch": 5,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "quorum": [
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         0
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     ],
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "quorum_names": [
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "compute-0"
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     ],
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "quorum_age": 5,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "monmap": {
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "epoch": 1,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "min_mon_release_name": "reef",
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "num_mons": 1
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     },
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "osdmap": {
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "epoch": 1,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "num_osds": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "num_up_osds": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "osd_up_since": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "num_in_osds": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "osd_in_since": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "num_remapped_pgs": 0
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     },
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "pgmap": {
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "pgs_by_state": [],
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "num_pgs": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "num_pools": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "num_objects": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "data_bytes": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "bytes_used": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "bytes_avail": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "bytes_total": 0
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     },
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "fsmap": {
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "epoch": 1,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "by_rank": [],
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "up:standby": 0
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     },
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "mgrmap": {
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "available": false,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "num_standbys": 0,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "modules": [
Nov 25 23:31:15 compute-0 distracted_jones[76058]:             "iostat",
Nov 25 23:31:15 compute-0 distracted_jones[76058]:             "nfs",
Nov 25 23:31:15 compute-0 distracted_jones[76058]:             "restful"
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         ],
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "services": {}
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     },
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "servicemap": {
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "epoch": 1,
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 23:31:15 compute-0 distracted_jones[76058]:         "services": {}
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     },
Nov 25 23:31:15 compute-0 distracted_jones[76058]:     "progress_events": {}
Nov 25 23:31:15 compute-0 distracted_jones[76058]: }
Nov 25 23:31:15 compute-0 systemd[1]: libpod-e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6.scope: Deactivated successfully.
Nov 25 23:31:15 compute-0 podman[76042]: 2025-11-25 23:31:15.901896582 +0000 UTC m=+0.569952694 container died e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-70f82046b61787684eb825545a885c89e630acc89315cc87f8740c098c5f357f-merged.mount: Deactivated successfully.
Nov 25 23:31:15 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2985170876' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:15 compute-0 podman[76042]: 2025-11-25 23:31:15.944911174 +0000 UTC m=+0.612967276 container remove e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6 (image=quay.io/ceph/ceph:v18, name=distracted_jones, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:31:15 compute-0 systemd[1]: libpod-conmon-e23f251efa117e9fb1efc823390495d1f21e7853a7f4dc5296db7d61626831c6.scope: Deactivated successfully.
Nov 25 23:31:16 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'devicehealth'
Nov 25 23:31:17 compute-0 ceph-mgr[75954]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 23:31:17 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 23:31:17 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:17.165+0000 7f0aaebe6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 23:31:17 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 23:31:17 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 23:31:17 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]:   from numpy import show_config as show_numpy_config
Nov 25 23:31:17 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:17.658+0000 7f0aaebe6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 23:31:17 compute-0 ceph-mgr[75954]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 23:31:17 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'influx'
Nov 25 23:31:17 compute-0 ceph-mgr[75954]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 23:31:17 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'insights'
Nov 25 23:31:17 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:17.880+0000 7f0aaebe6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 23:31:18 compute-0 podman[76097]: 2025-11-25 23:31:18.057723673 +0000 UTC m=+0.079275465 container create e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 23:31:18 compute-0 systemd[1]: Started libpod-conmon-e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868.scope.
Nov 25 23:31:18 compute-0 podman[76097]: 2025-11-25 23:31:18.023124246 +0000 UTC m=+0.044676058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:18 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'iostat'
Nov 25 23:31:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab09c449ced8a14e05233d1a2394f3e0453dc1bd9ead6340eca2d2d580921806/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab09c449ced8a14e05233d1a2394f3e0453dc1bd9ead6340eca2d2d580921806/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab09c449ced8a14e05233d1a2394f3e0453dc1bd9ead6340eca2d2d580921806/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:18 compute-0 podman[76097]: 2025-11-25 23:31:18.163539779 +0000 UTC m=+0.185091631 container init e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:18 compute-0 podman[76097]: 2025-11-25 23:31:18.173196568 +0000 UTC m=+0.194748360 container start e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:31:18 compute-0 podman[76097]: 2025-11-25 23:31:18.176816304 +0000 UTC m=+0.198368106 container attach e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 23:31:18 compute-0 ceph-mgr[75954]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 23:31:18 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'k8sevents'
Nov 25 23:31:18 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:18.353+0000 7f0aaebe6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 23:31:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 23:31:18 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271446657' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:18 compute-0 festive_goodall[76113]: 
Nov 25 23:31:18 compute-0 festive_goodall[76113]: {
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "health": {
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "status": "HEALTH_OK",
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "checks": {},
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "mutes": []
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     },
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "election_epoch": 5,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "quorum": [
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         0
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     ],
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "quorum_names": [
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "compute-0"
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     ],
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "quorum_age": 8,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "monmap": {
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "epoch": 1,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "min_mon_release_name": "reef",
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "num_mons": 1
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     },
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "osdmap": {
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "epoch": 1,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "num_osds": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "num_up_osds": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "osd_up_since": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "num_in_osds": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "osd_in_since": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "num_remapped_pgs": 0
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     },
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "pgmap": {
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "pgs_by_state": [],
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "num_pgs": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "num_pools": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "num_objects": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "data_bytes": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "bytes_used": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "bytes_avail": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "bytes_total": 0
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     },
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "fsmap": {
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "epoch": 1,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "by_rank": [],
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "up:standby": 0
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     },
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "mgrmap": {
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "available": false,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "num_standbys": 0,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "modules": [
Nov 25 23:31:18 compute-0 festive_goodall[76113]:             "iostat",
Nov 25 23:31:18 compute-0 festive_goodall[76113]:             "nfs",
Nov 25 23:31:18 compute-0 festive_goodall[76113]:             "restful"
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         ],
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "services": {}
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     },
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "servicemap": {
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "epoch": 1,
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 23:31:18 compute-0 festive_goodall[76113]:         "services": {}
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     },
Nov 25 23:31:18 compute-0 festive_goodall[76113]:     "progress_events": {}
Nov 25 23:31:18 compute-0 festive_goodall[76113]: }
Nov 25 23:31:18 compute-0 systemd[1]: libpod-e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868.scope: Deactivated successfully.
Nov 25 23:31:18 compute-0 podman[76097]: 2025-11-25 23:31:18.598005271 +0000 UTC m=+0.619557093 container died e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab09c449ced8a14e05233d1a2394f3e0453dc1bd9ead6340eca2d2d580921806-merged.mount: Deactivated successfully.
Nov 25 23:31:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2271446657' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:18 compute-0 podman[76097]: 2025-11-25 23:31:18.661816222 +0000 UTC m=+0.683368024 container remove e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868 (image=quay.io/ceph/ceph:v18, name=festive_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 23:31:18 compute-0 systemd[1]: libpod-conmon-e9422ac8a1d6902410a3f9d7e3004a517cac8f8e777a4350e006fd2d0ba8b868.scope: Deactivated successfully.
Nov 25 23:31:20 compute-0 sshd-session[76151]: Invalid user nagios from 202.83.162.167 port 56892
Nov 25 23:31:20 compute-0 podman[76153]: 2025-11-25 23:31:20.71345112 +0000 UTC m=+0.024732734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:20 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'localpool'
Nov 25 23:31:20 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 23:31:20 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'mirroring'
Nov 25 23:31:20 compute-0 podman[76153]: 2025-11-25 23:31:20.815356211 +0000 UTC m=+0.126637775 container create a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:20 compute-0 systemd[1]: Started libpod-conmon-a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af.scope.
Nov 25 23:31:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab5500194b98892d6a6d41c5981ecba23b63236c5711af3818268c60e0b5921/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab5500194b98892d6a6d41c5981ecba23b63236c5711af3818268c60e0b5921/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab5500194b98892d6a6d41c5981ecba23b63236c5711af3818268c60e0b5921/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:20 compute-0 podman[76153]: 2025-11-25 23:31:20.897936274 +0000 UTC m=+0.209217908 container init a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:31:20 compute-0 podman[76153]: 2025-11-25 23:31:20.908691403 +0000 UTC m=+0.219972937 container start a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:20 compute-0 podman[76153]: 2025-11-25 23:31:20.913134502 +0000 UTC m=+0.224416056 container attach a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:31:20 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'nfs'
Nov 25 23:31:21 compute-0 sshd-session[76151]: Received disconnect from 202.83.162.167 port 56892:11: Bye Bye [preauth]
Nov 25 23:31:21 compute-0 sshd-session[76151]: Disconnected from invalid user nagios 202.83.162.167 port 56892 [preauth]
Nov 25 23:31:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 23:31:21 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2970492701' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]: 
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]: {
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "health": {
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "status": "HEALTH_OK",
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "checks": {},
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "mutes": []
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     },
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "election_epoch": 5,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "quorum": [
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         0
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     ],
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "quorum_names": [
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "compute-0"
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     ],
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "quorum_age": 11,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "monmap": {
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "epoch": 1,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "min_mon_release_name": "reef",
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "num_mons": 1
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     },
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "osdmap": {
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "epoch": 1,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "num_osds": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "num_up_osds": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "osd_up_since": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "num_in_osds": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "osd_in_since": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "num_remapped_pgs": 0
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     },
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "pgmap": {
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "pgs_by_state": [],
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "num_pgs": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "num_pools": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "num_objects": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "data_bytes": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "bytes_used": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "bytes_avail": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "bytes_total": 0
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     },
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "fsmap": {
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "epoch": 1,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "by_rank": [],
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "up:standby": 0
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     },
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "mgrmap": {
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "available": false,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "num_standbys": 0,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "modules": [
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:             "iostat",
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:             "nfs",
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:             "restful"
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         ],
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "services": {}
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     },
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "servicemap": {
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "epoch": 1,
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:         "services": {}
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     },
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]:     "progress_events": {}
Nov 25 23:31:21 compute-0 hungry_cartwright[76167]: }
Nov 25 23:31:21 compute-0 systemd[1]: libpod-a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af.scope: Deactivated successfully.
Nov 25 23:31:21 compute-0 podman[76153]: 2025-11-25 23:31:21.322158673 +0000 UTC m=+0.633440247 container died a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 23:31:21 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2970492701' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ab5500194b98892d6a6d41c5981ecba23b63236c5711af3818268c60e0b5921-merged.mount: Deactivated successfully.
Nov 25 23:31:21 compute-0 podman[76153]: 2025-11-25 23:31:21.391357717 +0000 UTC m=+0.702639291 container remove a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af (image=quay.io/ceph/ceph:v18, name=hungry_cartwright, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:31:21 compute-0 systemd[1]: libpod-conmon-a51ce2f9a390b9baafc0982bb684515aa9ca8b37e94c878f4e93cba399bd28af.scope: Deactivated successfully.
Nov 25 23:31:21 compute-0 ceph-mgr[75954]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 23:31:21 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'orchestrator'
Nov 25 23:31:21 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:21.629+0000 7f0aaebe6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 23:31:22 compute-0 ceph-mgr[75954]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 23:31:22 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 23:31:22 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:22.233+0000 7f0aaebe6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 23:31:22 compute-0 ceph-mgr[75954]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 23:31:22 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'osd_support'
Nov 25 23:31:22 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:22.479+0000 7f0aaebe6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 23:31:22 compute-0 ceph-mgr[75954]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 23:31:22 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 23:31:22 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:22.698+0000 7f0aaebe6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 23:31:22 compute-0 ceph-mgr[75954]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 23:31:22 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'progress'
Nov 25 23:31:22 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:22.963+0000 7f0aaebe6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 23:31:23 compute-0 ceph-mgr[75954]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 23:31:23 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'prometheus'
Nov 25 23:31:23 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:23.209+0000 7f0aaebe6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 23:31:23 compute-0 podman[76207]: 2025-11-25 23:31:23.497863226 +0000 UTC m=+0.071149177 container create 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:23 compute-0 systemd[1]: Started libpod-conmon-764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53.scope.
Nov 25 23:31:23 compute-0 podman[76207]: 2025-11-25 23:31:23.470509663 +0000 UTC m=+0.043795654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25d1b721875761a5f38658318b0dcffd0fd387e64da96a6137b81509443a555/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25d1b721875761a5f38658318b0dcffd0fd387e64da96a6137b81509443a555/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c25d1b721875761a5f38658318b0dcffd0fd387e64da96a6137b81509443a555/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:23 compute-0 podman[76207]: 2025-11-25 23:31:23.587756615 +0000 UTC m=+0.161042556 container init 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 23:31:23 compute-0 podman[76207]: 2025-11-25 23:31:23.598078311 +0000 UTC m=+0.171364252 container start 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:31:23 compute-0 podman[76207]: 2025-11-25 23:31:23.602481189 +0000 UTC m=+0.175767120 container attach 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 23:31:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 23:31:23 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/164624347' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:23 compute-0 happy_franklin[76223]: 
Nov 25 23:31:23 compute-0 happy_franklin[76223]: {
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "health": {
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "status": "HEALTH_OK",
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "checks": {},
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "mutes": []
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     },
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "election_epoch": 5,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "quorum": [
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         0
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     ],
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "quorum_names": [
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "compute-0"
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     ],
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "quorum_age": 13,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "monmap": {
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "epoch": 1,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "min_mon_release_name": "reef",
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "num_mons": 1
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     },
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "osdmap": {
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "epoch": 1,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "num_osds": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "num_up_osds": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "osd_up_since": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "num_in_osds": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "osd_in_since": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "num_remapped_pgs": 0
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     },
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "pgmap": {
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "pgs_by_state": [],
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "num_pgs": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "num_pools": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "num_objects": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "data_bytes": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "bytes_used": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "bytes_avail": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "bytes_total": 0
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     },
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "fsmap": {
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "epoch": 1,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "by_rank": [],
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "up:standby": 0
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     },
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "mgrmap": {
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "available": false,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "num_standbys": 0,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "modules": [
Nov 25 23:31:23 compute-0 happy_franklin[76223]:             "iostat",
Nov 25 23:31:23 compute-0 happy_franklin[76223]:             "nfs",
Nov 25 23:31:23 compute-0 happy_franklin[76223]:             "restful"
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         ],
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "services": {}
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     },
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "servicemap": {
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "epoch": 1,
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 23:31:23 compute-0 happy_franklin[76223]:         "services": {}
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     },
Nov 25 23:31:23 compute-0 happy_franklin[76223]:     "progress_events": {}
Nov 25 23:31:23 compute-0 happy_franklin[76223]: }
Nov 25 23:31:24 compute-0 systemd[1]: libpod-764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53.scope: Deactivated successfully.
Nov 25 23:31:24 compute-0 podman[76207]: 2025-11-25 23:31:24.014562903 +0000 UTC m=+0.587848854 container died 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 23:31:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c25d1b721875761a5f38658318b0dcffd0fd387e64da96a6137b81509443a555-merged.mount: Deactivated successfully.
Nov 25 23:31:24 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/164624347' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:24 compute-0 podman[76207]: 2025-11-25 23:31:24.064343946 +0000 UTC m=+0.637629857 container remove 764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53 (image=quay.io/ceph/ceph:v18, name=happy_franklin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:31:24 compute-0 systemd[1]: libpod-conmon-764e28b2a269d94738a3aad46e09537502c15d90711671f63319e27176f6bf53.scope: Deactivated successfully.
Nov 25 23:31:24 compute-0 ceph-mgr[75954]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 23:31:24 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'rbd_support'
Nov 25 23:31:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:24.156+0000 7f0aaebe6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 23:31:24 compute-0 ceph-mgr[75954]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 23:31:24 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'restful'
Nov 25 23:31:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:24.452+0000 7f0aaebe6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 23:31:25 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'rgw'
Nov 25 23:31:25 compute-0 ceph-mgr[75954]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 23:31:25 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'rook'
Nov 25 23:31:25 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:25.791+0000 7f0aaebe6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 23:31:26 compute-0 podman[76261]: 2025-11-25 23:31:26.149191165 +0000 UTC m=+0.054117740 container create 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 23:31:26 compute-0 systemd[1]: Started libpod-conmon-5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99.scope.
Nov 25 23:31:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:26 compute-0 podman[76261]: 2025-11-25 23:31:26.122706196 +0000 UTC m=+0.027632811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f4c267b016974d9db2bb6c5027de81069d06296462d533f1199cb3d5017424/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f4c267b016974d9db2bb6c5027de81069d06296462d533f1199cb3d5017424/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f4c267b016974d9db2bb6c5027de81069d06296462d533f1199cb3d5017424/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:26 compute-0 podman[76261]: 2025-11-25 23:31:26.253582533 +0000 UTC m=+0.158509138 container init 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 23:31:26 compute-0 podman[76261]: 2025-11-25 23:31:26.259374998 +0000 UTC m=+0.164301533 container start 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:31:26 compute-0 podman[76261]: 2025-11-25 23:31:26.262908724 +0000 UTC m=+0.167835319 container attach 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:31:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 23:31:26 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3438032832' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]: 
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]: {
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "health": {
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "status": "HEALTH_OK",
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "checks": {},
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "mutes": []
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     },
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "election_epoch": 5,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "quorum": [
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         0
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     ],
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "quorum_names": [
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "compute-0"
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     ],
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "quorum_age": 16,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "monmap": {
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "epoch": 1,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "min_mon_release_name": "reef",
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "num_mons": 1
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     },
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "osdmap": {
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "epoch": 1,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "num_osds": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "num_up_osds": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "osd_up_since": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "num_in_osds": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "osd_in_since": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "num_remapped_pgs": 0
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     },
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "pgmap": {
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "pgs_by_state": [],
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "num_pgs": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "num_pools": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "num_objects": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "data_bytes": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "bytes_used": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "bytes_avail": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "bytes_total": 0
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     },
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "fsmap": {
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "epoch": 1,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "by_rank": [],
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "up:standby": 0
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     },
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "mgrmap": {
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "available": false,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "num_standbys": 0,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "modules": [
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:             "iostat",
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:             "nfs",
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:             "restful"
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         ],
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "services": {}
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     },
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "servicemap": {
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "epoch": 1,
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:         "services": {}
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     },
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]:     "progress_events": {}
Nov 25 23:31:26 compute-0 frosty_mccarthy[76279]: }
Nov 25 23:31:26 compute-0 systemd[1]: libpod-5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99.scope: Deactivated successfully.
Nov 25 23:31:26 compute-0 podman[76261]: 2025-11-25 23:31:26.656204012 +0000 UTC m=+0.561130547 container died 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 23:31:26 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3438032832' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7f4c267b016974d9db2bb6c5027de81069d06296462d533f1199cb3d5017424-merged.mount: Deactivated successfully.
Nov 25 23:31:26 compute-0 podman[76261]: 2025-11-25 23:31:26.740717087 +0000 UTC m=+0.645643632 container remove 5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99 (image=quay.io/ceph/ceph:v18, name=frosty_mccarthy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 25 23:31:26 compute-0 systemd[1]: libpod-conmon-5206282d11fa9ef5de7883ceaf119bbe99f349b1804f7f46f874e9e8ce231e99.scope: Deactivated successfully.
Nov 25 23:31:27 compute-0 ceph-mgr[75954]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 23:31:27 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'selftest'
Nov 25 23:31:27 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:27.807+0000 7f0aaebe6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 23:31:28 compute-0 ceph-mgr[75954]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 23:31:28 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'snap_schedule'
Nov 25 23:31:28 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:28.034+0000 7f0aaebe6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 23:31:28 compute-0 ceph-mgr[75954]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 23:31:28 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:28.266+0000 7f0aaebe6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 23:31:28 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'stats'
Nov 25 23:31:28 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'status'
Nov 25 23:31:28 compute-0 ceph-mgr[75954]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 23:31:28 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'telegraf'
Nov 25 23:31:28 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:28.735+0000 7f0aaebe6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 23:31:28 compute-0 podman[76316]: 2025-11-25 23:31:28.896919814 +0000 UTC m=+0.125452881 container create db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 23:31:28 compute-0 podman[76316]: 2025-11-25 23:31:28.815090355 +0000 UTC m=+0.043623462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:28 compute-0 systemd[1]: Started libpod-conmon-db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960.scope.
Nov 25 23:31:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dafbff88ca3e204120d090e4d1f66b749dc2fe6f01dc06751cdf2ad79d7cf0b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dafbff88ca3e204120d090e4d1f66b749dc2fe6f01dc06751cdf2ad79d7cf0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dafbff88ca3e204120d090e4d1f66b749dc2fe6f01dc06751cdf2ad79d7cf0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:28 compute-0 ceph-mgr[75954]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 23:31:28 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'telemetry'
Nov 25 23:31:28 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:28.972+0000 7f0aaebe6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 23:31:28 compute-0 podman[76316]: 2025-11-25 23:31:28.995881088 +0000 UTC m=+0.224414145 container init db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:29 compute-0 podman[76316]: 2025-11-25 23:31:29.004822814 +0000 UTC m=+0.233355871 container start db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:31:29 compute-0 podman[76316]: 2025-11-25 23:31:29.009148061 +0000 UTC m=+0.237681158 container attach db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:31:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 23:31:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3887798461' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]: 
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]: {
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "health": {
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "status": "HEALTH_OK",
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "checks": {},
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "mutes": []
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     },
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "election_epoch": 5,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "quorum": [
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         0
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     ],
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "quorum_names": [
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "compute-0"
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     ],
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "quorum_age": 19,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "monmap": {
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "epoch": 1,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "min_mon_release_name": "reef",
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "num_mons": 1
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     },
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "osdmap": {
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "epoch": 1,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "num_osds": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "num_up_osds": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "osd_up_since": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "num_in_osds": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "osd_in_since": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "num_remapped_pgs": 0
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     },
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "pgmap": {
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "pgs_by_state": [],
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "num_pgs": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "num_pools": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "num_objects": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "data_bytes": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "bytes_used": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "bytes_avail": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "bytes_total": 0
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     },
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "fsmap": {
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "epoch": 1,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "by_rank": [],
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "up:standby": 0
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     },
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "mgrmap": {
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "available": false,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "num_standbys": 0,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "modules": [
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:             "iostat",
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:             "nfs",
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:             "restful"
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         ],
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "services": {}
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     },
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "servicemap": {
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "epoch": 1,
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:         "services": {}
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     },
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]:     "progress_events": {}
Nov 25 23:31:29 compute-0 xenodochial_snyder[76333]: }
Nov 25 23:31:29 compute-0 systemd[1]: libpod-db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960.scope: Deactivated successfully.
Nov 25 23:31:29 compute-0 podman[76316]: 2025-11-25 23:31:29.406791692 +0000 UTC m=+0.635324799 container died db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:29 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3887798461' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dafbff88ca3e204120d090e4d1f66b749dc2fe6f01dc06751cdf2ad79d7cf0b-merged.mount: Deactivated successfully.
Nov 25 23:31:29 compute-0 podman[76316]: 2025-11-25 23:31:29.463392873 +0000 UTC m=+0.691925920 container remove db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960 (image=quay.io/ceph/ceph:v18, name=xenodochial_snyder, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 23:31:29 compute-0 systemd[1]: libpod-conmon-db4da38b929c0fbb2e38c6acb40b537a09513a773cdb147bbb179fad0ba07960.scope: Deactivated successfully.
Nov 25 23:31:29 compute-0 ceph-mgr[75954]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 23:31:29 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 23:31:29 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:29.526+0000 7f0aaebe6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 23:31:30 compute-0 ceph-mgr[75954]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 23:31:30 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'volumes'
Nov 25 23:31:30 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:30.133+0000 7f0aaebe6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 23:31:30 compute-0 ceph-mgr[75954]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 23:31:30 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'zabbix'
Nov 25 23:31:30 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:30.792+0000 7f0aaebe6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 23:31:31 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:31.030+0000 7f0aaebe6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: ms_deliver_dispatch: unhandled message 0x557a5f9b31e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.gwqfsl
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr handle_mgr_map Activating!
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr handle_mgr_map I am now activating
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.gwqfsl(active, starting, since 0.0126147s)
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.gwqfsl", "id": "compute-0.gwqfsl"} v 0) v1
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.gwqfsl", "id": "compute-0.gwqfsl"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: balancer
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: crash
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [balancer INFO root] Starting
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Manager daemon compute-0.gwqfsl is now available
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: devicehealth
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: iostat
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:31:31
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: nfs
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [devicehealth INFO root] Starting
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [balancer INFO root] No pools available
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: orchestrator
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: pg_autoscaler
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: progress
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [progress INFO root] Loading...
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [progress INFO root] No stored events to load
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [progress INFO root] Loaded [] historic events
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [rbd_support INFO root] recovery thread starting
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [rbd_support INFO root] starting setup
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: rbd_support
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: restful
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: status
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"} v 0) v1
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: telemetry
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [restful WARNING root] server not running: no certificate configured
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [rbd_support INFO root] PerfHandler: starting
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TaskHandler: starting
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"} v 0) v1
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: [rbd_support INFO root] setup complete
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 25 23:31:31 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: volumes
Nov 25 23:31:31 compute-0 ceph-mon[75654]: Activating manager daemon compute-0.gwqfsl
Nov 25 23:31:31 compute-0 ceph-mon[75654]: mgrmap e2: compute-0.gwqfsl(active, starting, since 0.0126147s)
Nov 25 23:31:31 compute-0 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.gwqfsl", "id": "compute-0.gwqfsl"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: Manager daemon compute-0.gwqfsl is now available
Nov 25 23:31:31 compute-0 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"}]: dispatch
Nov 25 23:31:31 compute-0 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:31 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:31 compute-0 podman[76451]: 2025-11-25 23:31:31.545613703 +0000 UTC m=+0.056890595 container create 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:31:31 compute-0 systemd[1]: Started libpod-conmon-05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889.scope.
Nov 25 23:31:31 compute-0 podman[76451]: 2025-11-25 23:31:31.516542522 +0000 UTC m=+0.027819454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1313fba31b604d924f0f1b48e27ae7ad758d6f2b5e2b0d67c59505feb545b170/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1313fba31b604d924f0f1b48e27ae7ad758d6f2b5e2b0d67c59505feb545b170/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1313fba31b604d924f0f1b48e27ae7ad758d6f2b5e2b0d67c59505feb545b170/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:31 compute-0 podman[76451]: 2025-11-25 23:31:31.644762949 +0000 UTC m=+0.156039851 container init 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:31 compute-0 podman[76451]: 2025-11-25 23:31:31.653353903 +0000 UTC m=+0.164630795 container start 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 23:31:31 compute-0 podman[76451]: 2025-11-25 23:31:31.657575849 +0000 UTC m=+0.168852741 container attach 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:31:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 23:31:32 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/172577660' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:32 compute-0 charming_noyce[76468]: 
Nov 25 23:31:32 compute-0 charming_noyce[76468]: {
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "health": {
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "status": "HEALTH_OK",
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "checks": {},
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "mutes": []
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     },
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "election_epoch": 5,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "quorum": [
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         0
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     ],
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "quorum_names": [
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "compute-0"
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     ],
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "quorum_age": 21,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "monmap": {
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "epoch": 1,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "min_mon_release_name": "reef",
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "num_mons": 1
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     },
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "osdmap": {
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "epoch": 1,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "num_osds": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "num_up_osds": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "osd_up_since": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "num_in_osds": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "osd_in_since": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "num_remapped_pgs": 0
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     },
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "pgmap": {
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "pgs_by_state": [],
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "num_pgs": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "num_pools": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "num_objects": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "data_bytes": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "bytes_used": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "bytes_avail": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "bytes_total": 0
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     },
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "fsmap": {
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "epoch": 1,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "by_rank": [],
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "up:standby": 0
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     },
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "mgrmap": {
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "available": false,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "num_standbys": 0,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "modules": [
Nov 25 23:31:32 compute-0 charming_noyce[76468]:             "iostat",
Nov 25 23:31:32 compute-0 charming_noyce[76468]:             "nfs",
Nov 25 23:31:32 compute-0 charming_noyce[76468]:             "restful"
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         ],
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "services": {}
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     },
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "servicemap": {
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "epoch": 1,
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 23:31:32 compute-0 charming_noyce[76468]:         "services": {}
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     },
Nov 25 23:31:32 compute-0 charming_noyce[76468]:     "progress_events": {}
Nov 25 23:31:32 compute-0 charming_noyce[76468]: }
Nov 25 23:31:32 compute-0 systemd[1]: libpod-05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889.scope: Deactivated successfully.
Nov 25 23:31:32 compute-0 podman[76451]: 2025-11-25 23:31:32.052692204 +0000 UTC m=+0.563969096 container died 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:32 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.gwqfsl(active, since 1.03325s)
Nov 25 23:31:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1313fba31b604d924f0f1b48e27ae7ad758d6f2b5e2b0d67c59505feb545b170-merged.mount: Deactivated successfully.
Nov 25 23:31:32 compute-0 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:32 compute-0 ceph-mon[75654]: from='mgr.14102 192.168.122.100:0/2560913842' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/172577660' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:32 compute-0 podman[76451]: 2025-11-25 23:31:32.108855623 +0000 UTC m=+0.620132485 container remove 05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889 (image=quay.io/ceph/ceph:v18, name=charming_noyce, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:32 compute-0 ceph-mon[75654]: mgrmap e3: compute-0.gwqfsl(active, since 1.03325s)
Nov 25 23:31:32 compute-0 systemd[1]: libpod-conmon-05bd5871e7b1165ca0d117252df7aac4dea1e3e7e2640a1f49e0080c62f04889.scope: Deactivated successfully.
Nov 25 23:31:33 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:31:33 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.gwqfsl(active, since 2s)
Nov 25 23:31:34 compute-0 ceph-mon[75654]: mgrmap e4: compute-0.gwqfsl(active, since 2s)
Nov 25 23:31:34 compute-0 podman[76506]: 2025-11-25 23:31:34.209578447 +0000 UTC m=+0.065752090 container create a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 23:31:34 compute-0 systemd[1]: Started libpod-conmon-a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b.scope.
Nov 25 23:31:34 compute-0 podman[76506]: 2025-11-25 23:31:34.182932701 +0000 UTC m=+0.039106394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476c73c76b5c3b46a40dd0bd99124bc2404656264ceab6df708f328dda16e7b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476c73c76b5c3b46a40dd0bd99124bc2404656264ceab6df708f328dda16e7b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/476c73c76b5c3b46a40dd0bd99124bc2404656264ceab6df708f328dda16e7b3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:34 compute-0 podman[76506]: 2025-11-25 23:31:34.308344821 +0000 UTC m=+0.164518454 container init a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:34 compute-0 podman[76506]: 2025-11-25 23:31:34.318601754 +0000 UTC m=+0.174775397 container start a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:31:34 compute-0 podman[76506]: 2025-11-25 23:31:34.322449918 +0000 UTC m=+0.178623551 container attach a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 23:31:34 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 23:31:34 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376796191' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:34 compute-0 musing_jang[76522]: 
Nov 25 23:31:34 compute-0 musing_jang[76522]: {
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "health": {
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "status": "HEALTH_OK",
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "checks": {},
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "mutes": []
Nov 25 23:31:34 compute-0 musing_jang[76522]:     },
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "election_epoch": 5,
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "quorum": [
Nov 25 23:31:34 compute-0 musing_jang[76522]:         0
Nov 25 23:31:34 compute-0 musing_jang[76522]:     ],
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "quorum_names": [
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "compute-0"
Nov 25 23:31:34 compute-0 musing_jang[76522]:     ],
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "quorum_age": 24,
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "monmap": {
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "epoch": 1,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "min_mon_release_name": "reef",
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "num_mons": 1
Nov 25 23:31:34 compute-0 musing_jang[76522]:     },
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "osdmap": {
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "epoch": 1,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "num_osds": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "num_up_osds": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "osd_up_since": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "num_in_osds": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "osd_in_since": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "num_remapped_pgs": 0
Nov 25 23:31:34 compute-0 musing_jang[76522]:     },
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "pgmap": {
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "pgs_by_state": [],
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "num_pgs": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "num_pools": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "num_objects": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "data_bytes": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "bytes_used": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "bytes_avail": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "bytes_total": 0
Nov 25 23:31:34 compute-0 musing_jang[76522]:     },
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "fsmap": {
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "epoch": 1,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "by_rank": [],
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "up:standby": 0
Nov 25 23:31:34 compute-0 musing_jang[76522]:     },
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "mgrmap": {
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "available": true,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "num_standbys": 0,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "modules": [
Nov 25 23:31:34 compute-0 musing_jang[76522]:             "iostat",
Nov 25 23:31:34 compute-0 musing_jang[76522]:             "nfs",
Nov 25 23:31:34 compute-0 musing_jang[76522]:             "restful"
Nov 25 23:31:34 compute-0 musing_jang[76522]:         ],
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "services": {}
Nov 25 23:31:34 compute-0 musing_jang[76522]:     },
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "servicemap": {
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "epoch": 1,
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "modified": "2025-11-25T23:31:07.189601+0000",
Nov 25 23:31:34 compute-0 musing_jang[76522]:         "services": {}
Nov 25 23:31:34 compute-0 musing_jang[76522]:     },
Nov 25 23:31:34 compute-0 musing_jang[76522]:     "progress_events": {}
Nov 25 23:31:34 compute-0 musing_jang[76522]: }
Nov 25 23:31:34 compute-0 systemd[1]: libpod-a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b.scope: Deactivated successfully.
Nov 25 23:31:35 compute-0 podman[76548]: 2025-11-25 23:31:35.00179049 +0000 UTC m=+0.027846694 container died a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 23:31:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-476c73c76b5c3b46a40dd0bd99124bc2404656264ceab6df708f328dda16e7b3-merged.mount: Deactivated successfully.
Nov 25 23:31:35 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:31:35 compute-0 podman[76548]: 2025-11-25 23:31:35.058376641 +0000 UTC m=+0.084432795 container remove a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b (image=quay.io/ceph/ceph:v18, name=musing_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 23:31:35 compute-0 systemd[1]: libpod-conmon-a1ae0fe065c29c3719832c8eb4346fe02450d9e7d045c9c080ac15a45a2b2d0b.scope: Deactivated successfully.
Nov 25 23:31:35 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2376796191' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:31:35 compute-0 podman[76562]: 2025-11-25 23:31:35.165350236 +0000 UTC m=+0.057828070 container create 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:35 compute-0 systemd[1]: Started libpod-conmon-2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af.scope.
Nov 25 23:31:35 compute-0 podman[76562]: 2025-11-25 23:31:35.144083874 +0000 UTC m=+0.036561718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f15075a0e7c9f96a0686de49adcb3593e41c76aa529bf7eae6ffc67e94fd950/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f15075a0e7c9f96a0686de49adcb3593e41c76aa529bf7eae6ffc67e94fd950/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f15075a0e7c9f96a0686de49adcb3593e41c76aa529bf7eae6ffc67e94fd950/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f15075a0e7c9f96a0686de49adcb3593e41c76aa529bf7eae6ffc67e94fd950/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:35 compute-0 podman[76562]: 2025-11-25 23:31:35.270963302 +0000 UTC m=+0.163441146 container init 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 25 23:31:35 compute-0 podman[76562]: 2025-11-25 23:31:35.285367862 +0000 UTC m=+0.177845666 container start 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:31:35 compute-0 podman[76562]: 2025-11-25 23:31:35.289235526 +0000 UTC m=+0.181713330 container attach 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:31:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 25 23:31:35 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3582724958' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 23:31:35 compute-0 systemd[1]: libpod-2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af.scope: Deactivated successfully.
Nov 25 23:31:35 compute-0 podman[76604]: 2025-11-25 23:31:35.892163773 +0000 UTC m=+0.042167853 container died 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 23:31:36 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3582724958' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 23:31:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f15075a0e7c9f96a0686de49adcb3593e41c76aa529bf7eae6ffc67e94fd950-merged.mount: Deactivated successfully.
Nov 25 23:31:36 compute-0 podman[76604]: 2025-11-25 23:31:36.40202138 +0000 UTC m=+0.552025400 container remove 2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af (image=quay.io/ceph/ceph:v18, name=crazy_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:31:36 compute-0 systemd[1]: libpod-conmon-2b0d2ead18f56ddcab606c0964551cfa9c0da689f2fdb99863c0aa22fe8a69af.scope: Deactivated successfully.
Nov 25 23:31:36 compute-0 podman[76619]: 2025-11-25 23:31:36.507509657 +0000 UTC m=+0.068513807 container create 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:36 compute-0 systemd[1]: Started libpod-conmon-66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1.scope.
Nov 25 23:31:36 compute-0 podman[76619]: 2025-11-25 23:31:36.474815643 +0000 UTC m=+0.035819883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c8a9dabf4c776b4ad498d00bc40198f8f663989f7cf92b0e21d6272c40ebcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c8a9dabf4c776b4ad498d00bc40198f8f663989f7cf92b0e21d6272c40ebcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27c8a9dabf4c776b4ad498d00bc40198f8f663989f7cf92b0e21d6272c40ebcd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:36 compute-0 podman[76619]: 2025-11-25 23:31:36.598571533 +0000 UTC m=+0.159575753 container init 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:36 compute-0 podman[76619]: 2025-11-25 23:31:36.610860008 +0000 UTC m=+0.171864178 container start 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 23:31:36 compute-0 podman[76619]: 2025-11-25 23:31:36.6159017 +0000 UTC m=+0.176905880 container attach 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:37 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:31:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 25 23:31:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1382411213' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 23:31:37 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1382411213' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 25 23:31:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1382411213' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 23:31:37 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.gwqfsl(active, since 6s)
Nov 25 23:31:37 compute-0 systemd[1]: libpod-66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1.scope: Deactivated successfully.
Nov 25 23:31:37 compute-0 podman[76619]: 2025-11-25 23:31:37.33683611 +0000 UTC m=+0.897840280 container died 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 23:31:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-27c8a9dabf4c776b4ad498d00bc40198f8f663989f7cf92b0e21d6272c40ebcd-merged.mount: Deactivated successfully.
Nov 25 23:31:37 compute-0 podman[76619]: 2025-11-25 23:31:37.386270487 +0000 UTC m=+0.947274627 container remove 66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1 (image=quay.io/ceph/ceph:v18, name=gifted_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 23:31:37 compute-0 systemd[1]: libpod-conmon-66fe16e4ddd54fd1c0f32bb98f80eacb463c01bd446f4f9b3487594c8f5fd6e1.scope: Deactivated successfully.
Nov 25 23:31:37 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: ignoring --setuser ceph since I am not root
Nov 25 23:31:37 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: ignoring --setgroup ceph since I am not root
Nov 25 23:31:37 compute-0 ceph-mgr[75954]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 23:31:37 compute-0 ceph-mgr[75954]: pidfile_write: ignore empty --pid-file
Nov 25 23:31:37 compute-0 podman[76676]: 2025-11-25 23:31:37.472404572 +0000 UTC m=+0.058767986 container create 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:37 compute-0 systemd[1]: Started libpod-conmon-87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717.scope.
Nov 25 23:31:37 compute-0 podman[76676]: 2025-11-25 23:31:37.443256681 +0000 UTC m=+0.029620145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:37 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'alerts'
Nov 25 23:31:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ca21aee2973ed1fb5c01fa7680567849e6697452926988a5504a2f8dd2010c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ca21aee2973ed1fb5c01fa7680567849e6697452926988a5504a2f8dd2010c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ca21aee2973ed1fb5c01fa7680567849e6697452926988a5504a2f8dd2010c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:37 compute-0 podman[76676]: 2025-11-25 23:31:37.594839143 +0000 UTC m=+0.181202637 container init 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 23:31:37 compute-0 podman[76676]: 2025-11-25 23:31:37.601041942 +0000 UTC m=+0.187405366 container start 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:31:37 compute-0 podman[76676]: 2025-11-25 23:31:37.605023096 +0000 UTC m=+0.191386520 container attach 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:37 compute-0 ceph-mgr[75954]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 23:31:37 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'balancer'
Nov 25 23:31:37 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:37.842+0000 7f36d5dd2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 23:31:38 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:38.077+0000 7f36d5dd2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 23:31:38 compute-0 ceph-mgr[75954]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 23:31:38 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'cephadm'
Nov 25 23:31:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 23:31:38 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723603241' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 23:31:38 compute-0 crazy_elion[76716]: {
Nov 25 23:31:38 compute-0 crazy_elion[76716]:     "epoch": 5,
Nov 25 23:31:38 compute-0 crazy_elion[76716]:     "available": true,
Nov 25 23:31:38 compute-0 crazy_elion[76716]:     "active_name": "compute-0.gwqfsl",
Nov 25 23:31:38 compute-0 crazy_elion[76716]:     "num_standby": 0
Nov 25 23:31:38 compute-0 crazy_elion[76716]: }
Nov 25 23:31:38 compute-0 systemd[1]: libpod-87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717.scope: Deactivated successfully.
Nov 25 23:31:38 compute-0 podman[76676]: 2025-11-25 23:31:38.17367948 +0000 UTC m=+0.760042934 container died 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 23:31:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4ca21aee2973ed1fb5c01fa7680567849e6697452926988a5504a2f8dd2010c-merged.mount: Deactivated successfully.
Nov 25 23:31:38 compute-0 podman[76676]: 2025-11-25 23:31:38.221799639 +0000 UTC m=+0.808163023 container remove 87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717 (image=quay.io/ceph/ceph:v18, name=crazy_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:38 compute-0 systemd[1]: libpod-conmon-87f5153d427d5d1a1fa373430de3b986624636da4b96df412d9b8933d0e33717.scope: Deactivated successfully.
Nov 25 23:31:38 compute-0 podman[76755]: 2025-11-25 23:31:38.297442429 +0000 UTC m=+0.053491873 container create fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:38 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1382411213' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 25 23:31:38 compute-0 ceph-mon[75654]: mgrmap e5: compute-0.gwqfsl(active, since 6s)
Nov 25 23:31:38 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/723603241' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 23:31:38 compute-0 systemd[1]: Started libpod-conmon-fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63.scope.
Nov 25 23:31:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a6f1c63d6c5d6f8f347ecd2658e47baef3680b19f1c84a5b0876ec47efc61e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a6f1c63d6c5d6f8f347ecd2658e47baef3680b19f1c84a5b0876ec47efc61e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a6f1c63d6c5d6f8f347ecd2658e47baef3680b19f1c84a5b0876ec47efc61e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:38 compute-0 podman[76755]: 2025-11-25 23:31:38.357381602 +0000 UTC m=+0.113431026 container init fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 23:31:38 compute-0 podman[76755]: 2025-11-25 23:31:38.367601965 +0000 UTC m=+0.123651399 container start fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:31:38 compute-0 podman[76755]: 2025-11-25 23:31:38.275361272 +0000 UTC m=+0.031410696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:38 compute-0 podman[76755]: 2025-11-25 23:31:38.371229618 +0000 UTC m=+0.127279052 container attach fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 23:31:39 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'crash'
Nov 25 23:31:40 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:40.189+0000 7f36d5dd2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 23:31:40 compute-0 ceph-mgr[75954]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 23:31:40 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'dashboard'
Nov 25 23:31:41 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'devicehealth'
Nov 25 23:31:41 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:41.745+0000 7f36d5dd2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 23:31:41 compute-0 ceph-mgr[75954]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 23:31:41 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 23:31:42 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 23:31:42 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 23:31:42 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]:   from numpy import show_config as show_numpy_config
Nov 25 23:31:42 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:42.234+0000 7f36d5dd2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 23:31:42 compute-0 ceph-mgr[75954]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 23:31:42 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'influx'
Nov 25 23:31:42 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:42.455+0000 7f36d5dd2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 23:31:42 compute-0 ceph-mgr[75954]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 23:31:42 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'insights'
Nov 25 23:31:42 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'iostat'
Nov 25 23:31:42 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:42.915+0000 7f36d5dd2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 23:31:42 compute-0 ceph-mgr[75954]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 23:31:42 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'k8sevents'
Nov 25 23:31:44 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'localpool'
Nov 25 23:31:44 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 23:31:45 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'mirroring'
Nov 25 23:31:45 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'nfs'
Nov 25 23:31:46 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:46.267+0000 7f36d5dd2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 23:31:46 compute-0 ceph-mgr[75954]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 23:31:46 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'orchestrator'
Nov 25 23:31:46 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:46.915+0000 7f36d5dd2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 23:31:46 compute-0 ceph-mgr[75954]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 23:31:46 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 23:31:47 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:47.179+0000 7f36d5dd2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 23:31:47 compute-0 ceph-mgr[75954]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 23:31:47 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'osd_support'
Nov 25 23:31:47 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:47.399+0000 7f36d5dd2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 23:31:47 compute-0 ceph-mgr[75954]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 23:31:47 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 23:31:47 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:47.656+0000 7f36d5dd2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 23:31:47 compute-0 ceph-mgr[75954]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 23:31:47 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'progress'
Nov 25 23:31:47 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:47.877+0000 7f36d5dd2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 23:31:47 compute-0 ceph-mgr[75954]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 23:31:47 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'prometheus'
Nov 25 23:31:48 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:48.836+0000 7f36d5dd2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 23:31:48 compute-0 ceph-mgr[75954]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 23:31:48 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'rbd_support'
Nov 25 23:31:49 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:49.124+0000 7f36d5dd2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 23:31:49 compute-0 ceph-mgr[75954]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 23:31:49 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'restful'
Nov 25 23:31:49 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'rgw'
Nov 25 23:31:50 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:50.548+0000 7f36d5dd2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 23:31:50 compute-0 ceph-mgr[75954]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 23:31:50 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'rook'
Nov 25 23:31:52 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:52.568+0000 7f36d5dd2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 23:31:52 compute-0 ceph-mgr[75954]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 23:31:52 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'selftest'
Nov 25 23:31:52 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:52.802+0000 7f36d5dd2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 23:31:52 compute-0 ceph-mgr[75954]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 23:31:52 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'snap_schedule'
Nov 25 23:31:53 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:53.034+0000 7f36d5dd2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 23:31:53 compute-0 ceph-mgr[75954]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 23:31:53 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'stats'
Nov 25 23:31:53 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'status'
Nov 25 23:31:53 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:53.523+0000 7f36d5dd2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 23:31:53 compute-0 ceph-mgr[75954]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 23:31:53 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'telegraf'
Nov 25 23:31:53 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:53.769+0000 7f36d5dd2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 23:31:53 compute-0 ceph-mgr[75954]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 23:31:53 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'telemetry'
Nov 25 23:31:54 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:54.372+0000 7f36d5dd2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 23:31:54 compute-0 ceph-mgr[75954]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 23:31:54 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 23:31:55 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:55.025+0000 7f36d5dd2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'volumes'
Nov 25 23:31:55 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:55.725+0000 7f36d5dd2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: mgr[py] Loading python module 'zabbix'
Nov 25 23:31:55 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:31:55.954+0000 7f36d5dd2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 23:31:55 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Active manager daemon compute-0.gwqfsl restarted
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:31:55 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.gwqfsl
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: ms_deliver_dispatch: unhandled message 0x564b3d8431e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: mgr handle_mgr_map Activating!
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: mgr handle_mgr_map I am now activating
Nov 25 23:31:55 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 25 23:31:55 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.gwqfsl(active, starting, since 0.0175253s)
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 23:31:55 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.gwqfsl", "id": "compute-0.gwqfsl"} v 0) v1
Nov 25 23:31:55 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.gwqfsl", "id": "compute-0.gwqfsl"}]: dispatch
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 25 23:31:55 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e1 all = 1
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 23:31:55 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 23:31:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 25 23:31:55 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: balancer
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: [balancer INFO root] Starting
Nov 25 23:31:55 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Manager daemon compute-0.gwqfsl is now available
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:31:55
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:31:55 compute-0 ceph-mgr[75954]: [balancer INFO root] No pools available
Nov 25 23:31:56 compute-0 ceph-mon[75654]: Active manager daemon compute-0.gwqfsl restarted
Nov 25 23:31:56 compute-0 ceph-mon[75654]: Activating manager daemon compute-0.gwqfsl
Nov 25 23:31:56 compute-0 ceph-mon[75654]: osdmap e2: 0 total, 0 up, 0 in
Nov 25 23:31:56 compute-0 ceph-mon[75654]: mgrmap e6: compute-0.gwqfsl(active, starting, since 0.0175253s)
Nov 25 23:31:56 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 23:31:56 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.gwqfsl", "id": "compute-0.gwqfsl"}]: dispatch
Nov 25 23:31:56 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 25 23:31:56 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 23:31:56 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 25 23:31:56 compute-0 ceph-mon[75654]: Manager daemon compute-0.gwqfsl is now available
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 25 23:31:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 25 23:31:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 25 23:31:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: cephadm
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: crash
Nov 25 23:31:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 23:31:56 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: devicehealth
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: iostat
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [devicehealth INFO root] Starting
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: nfs
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: orchestrator
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: pg_autoscaler
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: progress
Nov 25 23:31:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 23:31:56 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [progress INFO root] Loading...
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [progress INFO root] No stored events to load
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [progress INFO root] Loaded [] historic events
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [progress INFO root] Loaded OSDMap, ready.
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] recovery thread starting
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] starting setup
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: rbd_support
Nov 25 23:31:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"} v 0) v1
Nov 25 23:31:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"}]: dispatch
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: restful
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: status
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [restful INFO root] server_addr: :: server_port: 8003
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [restful WARNING root] server not running: no certificate configured
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: telemetry
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] PerfHandler: starting
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TaskHandler: starting
Nov 25 23:31:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"} v 0) v1
Nov 25 23:31:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"}]: dispatch
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] setup complete
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: mgr load Constructed class from module: volumes
Nov 25 23:31:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 25 23:31:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 25 23:31:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 25 23:31:56 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.gwqfsl(active, since 1.02756s)
Nov 25 23:31:56 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 25 23:31:56 compute-0 vigorous_keldysh[76771]: {
Nov 25 23:31:56 compute-0 vigorous_keldysh[76771]:     "mgrmap_epoch": 7,
Nov 25 23:31:56 compute-0 vigorous_keldysh[76771]:     "initialized": true
Nov 25 23:31:56 compute-0 vigorous_keldysh[76771]: }
Nov 25 23:31:57 compute-0 systemd[1]: libpod-fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63.scope: Deactivated successfully.
Nov 25 23:31:57 compute-0 podman[76755]: 2025-11-25 23:31:57.013314822 +0000 UTC m=+18.769364276 container died fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 23:31:57 compute-0 ceph-mon[75654]: Found migration_current of "None". Setting to last migration.
Nov 25 23:31:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:31:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:31:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/mirror_snapshot_schedule"}]: dispatch
Nov 25 23:31:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gwqfsl/trash_purge_schedule"}]: dispatch
Nov 25 23:31:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:57 compute-0 ceph-mon[75654]: mgrmap e7: compute-0.gwqfsl(active, since 1.02756s)
Nov 25 23:31:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a6f1c63d6c5d6f8f347ecd2658e47baef3680b19f1c84a5b0876ec47efc61e-merged.mount: Deactivated successfully.
Nov 25 23:31:57 compute-0 podman[76755]: 2025-11-25 23:31:57.073765277 +0000 UTC m=+18.829814721 container remove fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63 (image=quay.io/ceph/ceph:v18, name=vigorous_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:31:57 compute-0 systemd[1]: libpod-conmon-fb485b5be84488f3bee5d4248f8ee8e9e45f78db370fcc5e9505a1ab26396b63.scope: Deactivated successfully.
Nov 25 23:31:57 compute-0 podman[76930]: 2025-11-25 23:31:57.177830265 +0000 UTC m=+0.073454598 container create e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 23:31:57 compute-0 systemd[1]: Started libpod-conmon-e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce.scope.
Nov 25 23:31:57 compute-0 podman[76930]: 2025-11-25 23:31:57.138096107 +0000 UTC m=+0.033720490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662330ff237cf6dda9ea9d42b56274a7f96301fbaf6207fa352ec6f794faa7dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662330ff237cf6dda9ea9d42b56274a7f96301fbaf6207fa352ec6f794faa7dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662330ff237cf6dda9ea9d42b56274a7f96301fbaf6207fa352ec6f794faa7dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:57 compute-0 podman[76930]: 2025-11-25 23:31:57.269086782 +0000 UTC m=+0.164711095 container init e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 23:31:57 compute-0 podman[76930]: 2025-11-25 23:31:57.274718227 +0000 UTC m=+0.170342520 container start e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:31:57 compute-0 podman[76930]: 2025-11-25 23:31:57.277550824 +0000 UTC m=+0.173175147 container attach e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: [cephadm INFO cherrypy.error] [25/Nov/2025:23:31:57] ENGINE Bus STARTING
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : [25/Nov/2025:23:31:57] ENGINE Bus STARTING
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: [cephadm INFO cherrypy.error] [25/Nov/2025:23:31:57] ENGINE Serving on http://192.168.122.100:8765
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : [25/Nov/2025:23:31:57] ENGINE Serving on http://192.168.122.100:8765
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:31:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 25 23:31:57 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 23:31:57 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:31:57 compute-0 systemd[1]: libpod-e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce.scope: Deactivated successfully.
Nov 25 23:31:57 compute-0 podman[76996]: 2025-11-25 23:31:57.864688602 +0000 UTC m=+0.023653868 container died e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:31:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-662330ff237cf6dda9ea9d42b56274a7f96301fbaf6207fa352ec6f794faa7dc-merged.mount: Deactivated successfully.
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: [cephadm INFO cherrypy.error] [25/Nov/2025:23:31:57] ENGINE Serving on https://192.168.122.100:7150
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : [25/Nov/2025:23:31:57] ENGINE Serving on https://192.168.122.100:7150
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: [cephadm INFO cherrypy.error] [25/Nov/2025:23:31:57] ENGINE Bus STARTED
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : [25/Nov/2025:23:31:57] ENGINE Bus STARTED
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: [cephadm INFO cherrypy.error] [25/Nov/2025:23:31:57] ENGINE Client ('192.168.122.100', 50912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : [25/Nov/2025:23:31:57] ENGINE Client ('192.168.122.100', 50912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 23:31:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 23:31:57 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:31:57 compute-0 podman[76996]: 2025-11-25 23:31:57.900457945 +0000 UTC m=+0.059423191 container remove e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce (image=quay.io/ceph/ceph:v18, name=heuristic_yalow, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:31:57 compute-0 systemd[1]: libpod-conmon-e816d63afa7064a876782f649bbcbfabef9f8dbf90addf4d48310aabbc27d2ce.scope: Deactivated successfully.
Nov 25 23:31:57 compute-0 podman[77011]: 2025-11-25 23:31:57.973236737 +0000 UTC m=+0.044434767 container create 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 23:31:57 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:31:58 compute-0 systemd[1]: Started libpod-conmon-998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8.scope.
Nov 25 23:31:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b98067781e038edb030e3d4a4934358ac018e601f2573e05c3ed3906800b7b71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b98067781e038edb030e3d4a4934358ac018e601f2573e05c3ed3906800b7b71/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b98067781e038edb030e3d4a4934358ac018e601f2573e05c3ed3906800b7b71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:58 compute-0 podman[77011]: 2025-11-25 23:31:57.949008557 +0000 UTC m=+0.020206567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:58 compute-0 ceph-mon[75654]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 25 23:31:58 compute-0 ceph-mon[75654]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 25 23:31:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:31:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:31:58 compute-0 podman[77011]: 2025-11-25 23:31:58.05743947 +0000 UTC m=+0.128637480 container init 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 23:31:58 compute-0 podman[77011]: 2025-11-25 23:31:58.062762044 +0000 UTC m=+0.133960064 container start 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 23:31:58 compute-0 podman[77011]: 2025-11-25 23:31:58.068030177 +0000 UTC m=+0.139228167 container attach 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 23:31:58 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:31:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 25 23:31:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:58 compute-0 ceph-mgr[75954]: [cephadm INFO root] Set ssh ssh_user
Nov 25 23:31:58 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 25 23:31:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 25 23:31:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:58 compute-0 ceph-mgr[75954]: [cephadm INFO root] Set ssh ssh_config
Nov 25 23:31:58 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 25 23:31:58 compute-0 ceph-mgr[75954]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 25 23:31:58 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 25 23:31:58 compute-0 trusting_easley[77027]: ssh user set to ceph-admin. sudo will be used
Nov 25 23:31:58 compute-0 systemd[1]: libpod-998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8.scope: Deactivated successfully.
Nov 25 23:31:58 compute-0 podman[77055]: 2025-11-25 23:31:58.655661368 +0000 UTC m=+0.036532808 container died 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-b98067781e038edb030e3d4a4934358ac018e601f2573e05c3ed3906800b7b71-merged.mount: Deactivated successfully.
Nov 25 23:31:58 compute-0 podman[77055]: 2025-11-25 23:31:58.705684309 +0000 UTC m=+0.086555759 container remove 998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8 (image=quay.io/ceph/ceph:v18, name=trusting_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:31:58 compute-0 systemd[1]: libpod-conmon-998dcd943335fb0b3892b7f637e69784bdb834c95c82b74221c0c102520a0df8.scope: Deactivated successfully.
Nov 25 23:31:58 compute-0 podman[77070]: 2025-11-25 23:31:58.802044108 +0000 UTC m=+0.056619584 container create e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:31:58 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.gwqfsl(active, since 2s)
Nov 25 23:31:58 compute-0 systemd[1]: Started libpod-conmon-e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892.scope.
Nov 25 23:31:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:58 compute-0 podman[77070]: 2025-11-25 23:31:58.782090114 +0000 UTC m=+0.036665600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:58 compute-0 podman[77070]: 2025-11-25 23:31:58.890831649 +0000 UTC m=+0.145407175 container init e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:31:58 compute-0 podman[77070]: 2025-11-25 23:31:58.901512966 +0000 UTC m=+0.156088452 container start e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 23:31:58 compute-0 podman[77070]: 2025-11-25 23:31:58.905788103 +0000 UTC m=+0.160363579 container attach e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 23:31:59 compute-0 ceph-mon[75654]: [25/Nov/2025:23:31:57] ENGINE Bus STARTING
Nov 25 23:31:59 compute-0 ceph-mon[75654]: [25/Nov/2025:23:31:57] ENGINE Serving on http://192.168.122.100:8765
Nov 25 23:31:59 compute-0 ceph-mon[75654]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:31:59 compute-0 ceph-mon[75654]: [25/Nov/2025:23:31:57] ENGINE Serving on https://192.168.122.100:7150
Nov 25 23:31:59 compute-0 ceph-mon[75654]: [25/Nov/2025:23:31:57] ENGINE Bus STARTED
Nov 25 23:31:59 compute-0 ceph-mon[75654]: [25/Nov/2025:23:31:57] ENGINE Client ('192.168.122.100', 50912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 23:31:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:59 compute-0 ceph-mon[75654]: mgrmap e8: compute-0.gwqfsl(active, since 2s)
Nov 25 23:31:59 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:31:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 25 23:31:59 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:31:59 compute-0 ceph-mgr[75954]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 25 23:31:59 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 25 23:31:59 compute-0 ceph-mgr[75954]: [cephadm INFO root] Set ssh private key
Nov 25 23:31:59 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 25 23:31:59 compute-0 systemd[1]: libpod-e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892.scope: Deactivated successfully.
Nov 25 23:31:59 compute-0 podman[77070]: 2025-11-25 23:31:59.496136002 +0000 UTC m=+0.750711458 container died e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:31:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-323513e0de7de9f204a81c3d643ea127bf7134ac6b6a7368908867ddb2cbd4b6-merged.mount: Deactivated successfully.
Nov 25 23:31:59 compute-0 podman[77070]: 2025-11-25 23:31:59.53624068 +0000 UTC m=+0.790816126 container remove e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892 (image=quay.io/ceph/ceph:v18, name=gracious_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:31:59 compute-0 systemd[1]: libpod-conmon-e99595a12441ead945b2ea602a8abfbcc94bae23acbd98aeb581134e78184892.scope: Deactivated successfully.
Nov 25 23:31:59 compute-0 podman[77122]: 2025-11-25 23:31:59.629688041 +0000 UTC m=+0.059435140 container create 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 23:31:59 compute-0 systemd[1]: Started libpod-conmon-8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c.scope.
Nov 25 23:31:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:31:59 compute-0 podman[77122]: 2025-11-25 23:31:59.612384674 +0000 UTC m=+0.042131793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:31:59 compute-0 podman[77122]: 2025-11-25 23:31:59.715674695 +0000 UTC m=+0.145421834 container init 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 23:31:59 compute-0 podman[77122]: 2025-11-25 23:31:59.727133107 +0000 UTC m=+0.156880256 container start 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 23:31:59 compute-0 podman[77122]: 2025-11-25 23:31:59.731933996 +0000 UTC m=+0.161681125 container attach 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:31:59 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:32:00 compute-0 ceph-mon[75654]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:00 compute-0 ceph-mon[75654]: Set ssh ssh_user
Nov 25 23:32:00 compute-0 ceph-mon[75654]: Set ssh ssh_config
Nov 25 23:32:00 compute-0 ceph-mon[75654]: ssh user set to ceph-admin. sudo will be used
Nov 25 23:32:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019918601 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:00 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 25 23:32:00 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:00 compute-0 ceph-mgr[75954]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 25 23:32:00 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 25 23:32:00 compute-0 systemd[1]: libpod-8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c.scope: Deactivated successfully.
Nov 25 23:32:00 compute-0 conmon[77138]: conmon 8c62c1b85709ced067be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c.scope/container/memory.events
Nov 25 23:32:00 compute-0 podman[77122]: 2025-11-25 23:32:00.304453215 +0000 UTC m=+0.734200324 container died 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:32:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d54d73bd5714a8dac2225f47705eaff4016998969a565b75a9bc97a753d1db8-merged.mount: Deactivated successfully.
Nov 25 23:32:00 compute-0 podman[77122]: 2025-11-25 23:32:00.347447462 +0000 UTC m=+0.777194561 container remove 8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c (image=quay.io/ceph/ceph:v18, name=ecstatic_lederberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:00 compute-0 systemd[1]: libpod-conmon-8c62c1b85709ced067be323f2c3529086b920b89d7c0c9118a1487f16484dd8c.scope: Deactivated successfully.
Nov 25 23:32:00 compute-0 podman[77177]: 2025-11-25 23:32:00.406953991 +0000 UTC m=+0.038052577 container create a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:00 compute-0 systemd[1]: Started libpod-conmon-a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d.scope.
Nov 25 23:32:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651002a038bad2846e8708414c55d7a4b9fb28be9c71571dc470a207340a3436/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651002a038bad2846e8708414c55d7a4b9fb28be9c71571dc470a207340a3436/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651002a038bad2846e8708414c55d7a4b9fb28be9c71571dc470a207340a3436/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:00 compute-0 podman[77177]: 2025-11-25 23:32:00.38916553 +0000 UTC m=+0.020264106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:00 compute-0 podman[77177]: 2025-11-25 23:32:00.488072996 +0000 UTC m=+0.119171612 container init a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:00 compute-0 podman[77177]: 2025-11-25 23:32:00.500437652 +0000 UTC m=+0.131536248 container start a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 23:32:00 compute-0 podman[77177]: 2025-11-25 23:32:00.504410947 +0000 UTC m=+0.135509533 container attach a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 23:32:01 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:01 compute-0 stupefied_chaplygin[77193]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3THDlDiSF6PqONtAShZXCwFchR4rHTy5LPyheMSOgxaFzRe03pZG6osav83q8oZrqgKFplCm+9dnQIrFcmd7iLbDo24HWeXXUfzrsvTZgIVI8UN/6hnGxYC+5h3E5m4klPM9IcXV9KAVHDNunhWU2jIRJzASluLI0kbZt2MJ4nuPgoD03C5hBPXlTc0ndBuSmPCNz6GYl+sX0h95buuaLUKCm7G/cnKmNyXsqPZP5FdeXco80uvTdhIbEGqTRKqWph3FI18LoAWzWV0yPMthlcnNRy3ieGkVLO/IYzthfRxVEFtCMLFX12YH302IwsnQaxf8vStRKCrPT0z4DEPwU/5gxK+2W4pKStrPtaFR+zaMkyUUbzGPynrcln+k4szFjrcLUCK5aogyDdVEDxGP06YshuteHcUD+aiwo38MyEJXrVT8BhnI7TawvHvIpDNzD34yERn2J6wCMS41TAsAqTh7oE0P3kMw1DuCEQNMzJETLNmYLOD416M3lDsQz4vU= zuul@controller
Nov 25 23:32:01 compute-0 systemd[1]: libpod-a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d.scope: Deactivated successfully.
Nov 25 23:32:01 compute-0 podman[77177]: 2025-11-25 23:32:01.035504237 +0000 UTC m=+0.666602863 container died a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 23:32:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-651002a038bad2846e8708414c55d7a4b9fb28be9c71571dc470a207340a3436-merged.mount: Deactivated successfully.
Nov 25 23:32:01 compute-0 podman[77177]: 2025-11-25 23:32:01.083884764 +0000 UTC m=+0.714983320 container remove a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d (image=quay.io/ceph/ceph:v18, name=stupefied_chaplygin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:01 compute-0 systemd[1]: libpod-conmon-a4926ef55910212f2abbcf8b621d3ed644f9ad5a850847a1ed1e5b83f9ddf89d.scope: Deactivated successfully.
Nov 25 23:32:01 compute-0 podman[77231]: 2025-11-25 23:32:01.147705543 +0000 UTC m=+0.041713969 container create 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:32:01 compute-0 systemd[1]: Started libpod-conmon-0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53.scope.
Nov 25 23:32:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38e401c6be48f144acbe3d96348e5770b32d2ce483063e17922ca325ee76079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38e401c6be48f144acbe3d96348e5770b32d2ce483063e17922ca325ee76079/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f38e401c6be48f144acbe3d96348e5770b32d2ce483063e17922ca325ee76079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:01 compute-0 podman[77231]: 2025-11-25 23:32:01.21922844 +0000 UTC m=+0.113236896 container init 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 23:32:01 compute-0 podman[77231]: 2025-11-25 23:32:01.129022643 +0000 UTC m=+0.023031079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:01 compute-0 podman[77231]: 2025-11-25 23:32:01.227180973 +0000 UTC m=+0.121189389 container start 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:01 compute-0 podman[77231]: 2025-11-25 23:32:01.231133469 +0000 UTC m=+0.125141935 container attach 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 23:32:01 compute-0 ceph-mon[75654]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:01 compute-0 ceph-mon[75654]: Set ssh ssh_identity_key
Nov 25 23:32:01 compute-0 ceph-mon[75654]: Set ssh private key
Nov 25 23:32:01 compute-0 ceph-mon[75654]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:01 compute-0 ceph-mon[75654]: Set ssh ssh_identity_pub
Nov 25 23:32:01 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:01 compute-0 anacron[7485]: Job `cron.daily' started
Nov 25 23:32:01 compute-0 anacron[7485]: Job `cron.daily' terminated
Nov 25 23:32:01 compute-0 sshd-session[77277]: Accepted publickey for ceph-admin from 192.168.122.100 port 42520 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:01 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 23:32:01 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:32:01 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 23:32:01 compute-0 systemd-logind[789]: New session 21 of user ceph-admin.
Nov 25 23:32:02 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 23:32:02 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 25 23:32:02 compute-0 systemd[77281]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:02 compute-0 systemd[77281]: Queued start job for default target Main User Target.
Nov 25 23:32:02 compute-0 systemd[77281]: Created slice User Application Slice.
Nov 25 23:32:02 compute-0 systemd[77281]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 23:32:02 compute-0 systemd[77281]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 23:32:02 compute-0 systemd[77281]: Reached target Paths.
Nov 25 23:32:02 compute-0 systemd[77281]: Reached target Timers.
Nov 25 23:32:02 compute-0 systemd[77281]: Starting D-Bus User Message Bus Socket...
Nov 25 23:32:02 compute-0 systemd[77281]: Starting Create User's Volatile Files and Directories...
Nov 25 23:32:02 compute-0 sshd-session[77295]: Accepted publickey for ceph-admin from 192.168.122.100 port 42522 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:02 compute-0 systemd[77281]: Listening on D-Bus User Message Bus Socket.
Nov 25 23:32:02 compute-0 systemd[77281]: Reached target Sockets.
Nov 25 23:32:02 compute-0 systemd-logind[789]: New session 23 of user ceph-admin.
Nov 25 23:32:02 compute-0 systemd[77281]: Finished Create User's Volatile Files and Directories.
Nov 25 23:32:02 compute-0 systemd[77281]: Reached target Basic System.
Nov 25 23:32:02 compute-0 systemd[77281]: Reached target Main User Target.
Nov 25 23:32:02 compute-0 systemd[77281]: Startup finished in 165ms.
Nov 25 23:32:02 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 25 23:32:02 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Nov 25 23:32:02 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 25 23:32:02 compute-0 sshd-session[77277]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:02 compute-0 sshd-session[77295]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:02 compute-0 ceph-mon[75654]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:02 compute-0 sudo[77302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:02 compute-0 sudo[77302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:02 compute-0 sudo[77302]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:02 compute-0 sudo[77327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:02 compute-0 sudo[77327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:02 compute-0 sudo[77327]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:02 compute-0 sshd-session[77352]: Accepted publickey for ceph-admin from 192.168.122.100 port 42530 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:02 compute-0 systemd-logind[789]: New session 24 of user ceph-admin.
Nov 25 23:32:02 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 25 23:32:02 compute-0 sshd-session[77352]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:02 compute-0 sudo[77356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:02 compute-0 sudo[77356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:02 compute-0 sudo[77356]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:02 compute-0 sudo[77381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 25 23:32:02 compute-0 sudo[77381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:02 compute-0 sudo[77381]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:03 compute-0 sshd-session[77273]: Invalid user work from 68.183.82.237 port 59884
Nov 25 23:32:03 compute-0 sshd-session[77406]: Accepted publickey for ceph-admin from 192.168.122.100 port 42536 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:03 compute-0 systemd-logind[789]: New session 25 of user ceph-admin.
Nov 25 23:32:03 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 25 23:32:03 compute-0 sshd-session[77406]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:03 compute-0 sudo[77410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:03 compute-0 sudo[77410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:03 compute-0 ceph-mon[75654]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:03 compute-0 sudo[77410]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:03 compute-0 sshd-session[77273]: Received disconnect from 68.183.82.237 port 59884:11: Bye Bye [preauth]
Nov 25 23:32:03 compute-0 sshd-session[77273]: Disconnected from invalid user work 68.183.82.237 port 59884 [preauth]
Nov 25 23:32:03 compute-0 sudo[77435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 25 23:32:03 compute-0 sudo[77435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:03 compute-0 sudo[77435]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:03 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 25 23:32:03 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 25 23:32:03 compute-0 sshd-session[77460]: Accepted publickey for ceph-admin from 192.168.122.100 port 42550 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:03 compute-0 systemd-logind[789]: New session 26 of user ceph-admin.
Nov 25 23:32:03 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 25 23:32:03 compute-0 sshd-session[77460]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:03 compute-0 sudo[77464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:03 compute-0 sudo[77464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:03 compute-0 sudo[77464]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:03 compute-0 sudo[77489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:03 compute-0 sudo[77489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:03 compute-0 sudo[77489]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:03 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:32:04 compute-0 sshd-session[77514]: Accepted publickey for ceph-admin from 192.168.122.100 port 42552 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:04 compute-0 systemd-logind[789]: New session 27 of user ceph-admin.
Nov 25 23:32:04 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 25 23:32:04 compute-0 sshd-session[77514]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:04 compute-0 sudo[77518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:04 compute-0 sudo[77518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:04 compute-0 sudo[77518]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:04 compute-0 ceph-mon[75654]: Deploying cephadm binary to compute-0
Nov 25 23:32:04 compute-0 sudo[77543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:04 compute-0 sudo[77543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:04 compute-0 sudo[77543]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:04 compute-0 sshd-session[77568]: Accepted publickey for ceph-admin from 192.168.122.100 port 42564 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:04 compute-0 systemd-logind[789]: New session 28 of user ceph-admin.
Nov 25 23:32:04 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 25 23:32:04 compute-0 sshd-session[77568]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:04 compute-0 sudo[77572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:04 compute-0 sudo[77572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:04 compute-0 sudo[77572]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:04 compute-0 sudo[77597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 25 23:32:04 compute-0 sudo[77597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:04 compute-0 sudo[77597]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:05 compute-0 sshd-session[77622]: Accepted publickey for ceph-admin from 192.168.122.100 port 42566 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:05 compute-0 systemd-logind[789]: New session 29 of user ceph-admin.
Nov 25 23:32:05 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 25 23:32:05 compute-0 sshd-session[77622]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052964 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:05 compute-0 sudo[77626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:05 compute-0 sudo[77626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:05 compute-0 sudo[77626]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:05 compute-0 sudo[77651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:05 compute-0 sudo[77651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:05 compute-0 sudo[77651]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:05 compute-0 sshd-session[77676]: Accepted publickey for ceph-admin from 192.168.122.100 port 42580 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:05 compute-0 systemd-logind[789]: New session 30 of user ceph-admin.
Nov 25 23:32:05 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 25 23:32:05 compute-0 sshd-session[77676]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:05 compute-0 sudo[77680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:05 compute-0 sudo[77680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:05 compute-0 sudo[77680]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:05 compute-0 sudo[77705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 25 23:32:05 compute-0 sudo[77705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:05 compute-0 sudo[77705]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:05 compute-0 sshd-session[77730]: Accepted publickey for ceph-admin from 192.168.122.100 port 42586 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:05 compute-0 systemd-logind[789]: New session 31 of user ceph-admin.
Nov 25 23:32:05 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 25 23:32:05 compute-0 sshd-session[77730]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:05 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:32:06 compute-0 sshd-session[77757]: Accepted publickey for ceph-admin from 192.168.122.100 port 42596 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:06 compute-0 systemd-logind[789]: New session 32 of user ceph-admin.
Nov 25 23:32:06 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 25 23:32:06 compute-0 sshd-session[77757]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:06 compute-0 sudo[77761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:06 compute-0 sudo[77761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:06 compute-0 sudo[77761]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:06 compute-0 sudo[77786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 25 23:32:06 compute-0 sudo[77786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:06 compute-0 sudo[77786]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:06 compute-0 sshd-session[77811]: Accepted publickey for ceph-admin from 192.168.122.100 port 42600 ssh2: RSA SHA256:+qskse9WP1l+rCMHNlGkhdiL5W/Nev0fY09y4UtHwh8
Nov 25 23:32:06 compute-0 systemd-logind[789]: New session 33 of user ceph-admin.
Nov 25 23:32:06 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Nov 25 23:32:06 compute-0 sshd-session[77811]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 25 23:32:07 compute-0 sudo[77815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:07 compute-0 sudo[77815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:07 compute-0 sudo[77815]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:07 compute-0 sudo[77840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 25 23:32:07 compute-0 sudo[77840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:07 compute-0 sudo[77840]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:07 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 23:32:07 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:07 compute-0 ceph-mgr[75954]: [cephadm INFO root] Added host compute-0
Nov 25 23:32:07 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 23:32:07 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:32:07 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 23:32:07 compute-0 fervent_carver[77247]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 23:32:07 compute-0 systemd[1]: libpod-0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53.scope: Deactivated successfully.
Nov 25 23:32:07 compute-0 podman[77231]: 2025-11-25 23:32:07.392822858 +0000 UTC m=+6.286831284 container died 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 23:32:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f38e401c6be48f144acbe3d96348e5770b32d2ce483063e17922ca325ee76079-merged.mount: Deactivated successfully.
Nov 25 23:32:07 compute-0 sudo[77886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:07 compute-0 sudo[77886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:07 compute-0 sudo[77886]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:07 compute-0 podman[77231]: 2025-11-25 23:32:07.444221355 +0000 UTC m=+6.338229781 container remove 0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53 (image=quay.io/ceph/ceph:v18, name=fervent_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 23:32:07 compute-0 systemd[1]: libpod-conmon-0e86ee7925a8d6bf6b17756b1309e4cf2d3b47b4ae96c5e14d7dbb8c76effc53.scope: Deactivated successfully.
Nov 25 23:32:07 compute-0 sudo[77924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:07 compute-0 sudo[77924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:07 compute-0 sudo[77924]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:07 compute-0 podman[77930]: 2025-11-25 23:32:07.526279484 +0000 UTC m=+0.053714621 container create 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:07 compute-0 systemd[1]: Started libpod-conmon-76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4.scope.
Nov 25 23:32:07 compute-0 sudo[77960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:07 compute-0 sudo[77960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:07 compute-0 sudo[77960]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:07 compute-0 podman[77930]: 2025-11-25 23:32:07.510774429 +0000 UTC m=+0.038209546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72bac06a2ac875cc0d531d62ec22f3b4a2f3de60183c3122b85f13012a806ca0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72bac06a2ac875cc0d531d62ec22f3b4a2f3de60183c3122b85f13012a806ca0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72bac06a2ac875cc0d531d62ec22f3b4a2f3de60183c3122b85f13012a806ca0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:07 compute-0 podman[77930]: 2025-11-25 23:32:07.620998442 +0000 UTC m=+0.148433579 container init 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:07 compute-0 sudo[77993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 25 23:32:07 compute-0 podman[77930]: 2025-11-25 23:32:07.631041882 +0000 UTC m=+0.158476999 container start 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:07 compute-0 podman[77930]: 2025-11-25 23:32:07.634250857 +0000 UTC m=+0.161685994 container attach 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:07 compute-0 sudo[77993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:07 compute-0 podman[78047]: 2025-11-25 23:32:07.897202454 +0000 UTC m=+0.043695021 container create b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 23:32:07 compute-0 systemd[1]: Started libpod-conmon-b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead.scope.
Nov 25 23:32:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:07 compute-0 podman[78047]: 2025-11-25 23:32:07.880361773 +0000 UTC m=+0.026854360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:07 compute-0 podman[78047]: 2025-11-25 23:32:07.973752715 +0000 UTC m=+0.120245352 container init b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:07 compute-0 podman[78047]: 2025-11-25 23:32:07.979394756 +0000 UTC m=+0.125887363 container start b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:32:07 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:32:07 compute-0 podman[78047]: 2025-11-25 23:32:07.983372863 +0000 UTC m=+0.129865460 container attach b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 23:32:08 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:08 compute-0 ceph-mgr[75954]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 25 23:32:08 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 25 23:32:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 23:32:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:08 compute-0 bold_torvalds[77989]: Scheduled mon update...
Nov 25 23:32:08 compute-0 systemd[1]: libpod-76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4.scope: Deactivated successfully.
Nov 25 23:32:08 compute-0 podman[77930]: 2025-11-25 23:32:08.162279427 +0000 UTC m=+0.689714584 container died 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 23:32:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-72bac06a2ac875cc0d531d62ec22f3b4a2f3de60183c3122b85f13012a806ca0-merged.mount: Deactivated successfully.
Nov 25 23:32:08 compute-0 podman[77930]: 2025-11-25 23:32:08.220509678 +0000 UTC m=+0.747944825 container remove 76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4 (image=quay.io/ceph/ceph:v18, name=bold_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:32:08 compute-0 systemd[1]: libpod-conmon-76b298c52b99791f2b111e209546dcb402dba83bec0c6fccc543976f39320db4.scope: Deactivated successfully.
Nov 25 23:32:08 compute-0 flamboyant_mccarthy[78082]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 25 23:32:08 compute-0 systemd[1]: libpod-b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead.scope: Deactivated successfully.
Nov 25 23:32:08 compute-0 podman[78047]: 2025-11-25 23:32:08.271622877 +0000 UTC m=+0.418115454 container died b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 23:32:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-81e4dd50fedbcf6f2f874545e0d477b8ec2a0b2ca1146eb470c148886a26f988-merged.mount: Deactivated successfully.
Nov 25 23:32:08 compute-0 podman[78047]: 2025-11-25 23:32:08.322454839 +0000 UTC m=+0.468947446 container remove b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead (image=quay.io/ceph/ceph:v18, name=flamboyant_mccarthy, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:32:08 compute-0 systemd[1]: libpod-conmon-b07529db55fd27958ba0a42df4149e4df319aa8e5682d53bca50750bbe1e2ead.scope: Deactivated successfully.
Nov 25 23:32:08 compute-0 podman[78101]: 2025-11-25 23:32:08.342541418 +0000 UTC m=+0.091296547 container create 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 23:32:08 compute-0 sudo[77993]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 25 23:32:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:08 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:08 compute-0 ceph-mon[75654]: Added host compute-0
Nov 25 23:32:08 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:32:08 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:08 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:08 compute-0 systemd[1]: Started libpod-conmon-9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d.scope.
Nov 25 23:32:08 compute-0 podman[78101]: 2025-11-25 23:32:08.291783928 +0000 UTC m=+0.040539137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada961579c74cd1e1e65ab8aa77dbfa311f4d5409fc53c58344a9ab6f3f79c4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada961579c74cd1e1e65ab8aa77dbfa311f4d5409fc53c58344a9ab6f3f79c4e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada961579c74cd1e1e65ab8aa77dbfa311f4d5409fc53c58344a9ab6f3f79c4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:08 compute-0 podman[78101]: 2025-11-25 23:32:08.428745827 +0000 UTC m=+0.177500956 container init 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 23:32:08 compute-0 podman[78101]: 2025-11-25 23:32:08.434749829 +0000 UTC m=+0.183504948 container start 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:32:08 compute-0 podman[78101]: 2025-11-25 23:32:08.437947485 +0000 UTC m=+0.186702604 container attach 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:32:08 compute-0 sudo[78130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:08 compute-0 sudo[78130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:08 compute-0 sudo[78130]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:08 compute-0 sudo[78158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:08 compute-0 sudo[78158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:08 compute-0 sudo[78158]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:08 compute-0 sudo[78183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:08 compute-0 sudo[78183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:08 compute-0 sudo[78183]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:08 compute-0 sudo[78208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 23:32:08 compute-0 sudo[78208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:08 compute-0 sudo[78208]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:08 compute-0 sudo[78272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:08 compute-0 sudo[78272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:08 compute-0 sudo[78272]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:09 compute-0 sudo[78297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:09 compute-0 sudo[78297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:09 compute-0 sudo[78297]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:09 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:09 compute-0 ceph-mgr[75954]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 25 23:32:09 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 25 23:32:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 23:32:09 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:09 compute-0 unruffled_kapitsa[78128]: Scheduled mgr update...
Nov 25 23:32:09 compute-0 systemd[1]: libpod-9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d.scope: Deactivated successfully.
Nov 25 23:32:09 compute-0 podman[78101]: 2025-11-25 23:32:09.061547715 +0000 UTC m=+0.810302854 container died 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:09 compute-0 sudo[78322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:09 compute-0 sudo[78322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ada961579c74cd1e1e65ab8aa77dbfa311f4d5409fc53c58344a9ab6f3f79c4e-merged.mount: Deactivated successfully.
Nov 25 23:32:09 compute-0 sudo[78322]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:09 compute-0 podman[78101]: 2025-11-25 23:32:09.115604414 +0000 UTC m=+0.864359573 container remove 9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d (image=quay.io/ceph/ceph:v18, name=unruffled_kapitsa, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 23:32:09 compute-0 systemd[1]: libpod-conmon-9614a59221ab4851bc83f8ac87fb3241ef4f085dd0b4ea2fcae4e490e0b63d7d.scope: Deactivated successfully.
Nov 25 23:32:09 compute-0 sudo[78362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 23:32:09 compute-0 sudo[78362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:09 compute-0 podman[78380]: 2025-11-25 23:32:09.180435431 +0000 UTC m=+0.041113512 container create 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:09 compute-0 systemd[1]: Started libpod-conmon-33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d.scope.
Nov 25 23:32:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:09 compute-0 podman[78380]: 2025-11-25 23:32:09.161565306 +0000 UTC m=+0.022243357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f21df22da39ee1569737662165a6ab090a7a9e15ea4c85fc7c0ab8b2be3dc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f21df22da39ee1569737662165a6ab090a7a9e15ea4c85fc7c0ab8b2be3dc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f21df22da39ee1569737662165a6ab090a7a9e15ea4c85fc7c0ab8b2be3dc3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:09 compute-0 podman[78380]: 2025-11-25 23:32:09.278700624 +0000 UTC m=+0.139378685 container init 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 23:32:09 compute-0 podman[78380]: 2025-11-25 23:32:09.289403762 +0000 UTC m=+0.150081793 container start 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 23:32:09 compute-0 podman[78380]: 2025-11-25 23:32:09.292652108 +0000 UTC m=+0.153330179 container attach 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:32:09 compute-0 ceph-mon[75654]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:09 compute-0 ceph-mon[75654]: Saving service mon spec with placement count:5
Nov 25 23:32:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:09 compute-0 podman[78499]: 2025-11-25 23:32:09.794844496 +0000 UTC m=+0.084759582 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 23:32:09 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:09 compute-0 ceph-mgr[75954]: [cephadm INFO root] Saving service crash spec with placement *
Nov 25 23:32:09 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 25 23:32:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 23:32:09 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:09 compute-0 fervent_chatterjee[78404]: Scheduled crash update...
Nov 25 23:32:09 compute-0 systemd[1]: libpod-33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d.scope: Deactivated successfully.
Nov 25 23:32:09 compute-0 conmon[78404]: conmon 33638c6bed675d27bdc1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d.scope/container/memory.events
Nov 25 23:32:09 compute-0 podman[78380]: 2025-11-25 23:32:09.824147662 +0000 UTC m=+0.684825743 container died 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-32f21df22da39ee1569737662165a6ab090a7a9e15ea4c85fc7c0ab8b2be3dc3-merged.mount: Deactivated successfully.
Nov 25 23:32:09 compute-0 podman[78380]: 2025-11-25 23:32:09.882713511 +0000 UTC m=+0.743391592 container remove 33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d (image=quay.io/ceph/ceph:v18, name=fervent_chatterjee, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 23:32:09 compute-0 systemd[1]: libpod-conmon-33638c6bed675d27bdc196d42ba6a353f6c9c182f8d51b20b0b817fda4e0507d.scope: Deactivated successfully.
Nov 25 23:32:09 compute-0 podman[78535]: 2025-11-25 23:32:09.957842264 +0000 UTC m=+0.053341740 container create eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 23:32:09 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:32:10 compute-0 systemd[1]: Started libpod-conmon-eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b.scope.
Nov 25 23:32:10 compute-0 podman[78535]: 2025-11-25 23:32:09.93156761 +0000 UTC m=+0.027067116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707c05e1e4299089a0a9221804fb86cc4b77b08b687e65fb08231eeba73c048b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707c05e1e4299089a0a9221804fb86cc4b77b08b687e65fb08231eeba73c048b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707c05e1e4299089a0a9221804fb86cc4b77b08b687e65fb08231eeba73c048b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:10 compute-0 podman[78535]: 2025-11-25 23:32:10.066712312 +0000 UTC m=+0.162211818 container init eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:10 compute-0 podman[78535]: 2025-11-25 23:32:10.078112617 +0000 UTC m=+0.173612113 container start eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:10 compute-0 podman[78535]: 2025-11-25 23:32:10.081900799 +0000 UTC m=+0.177400285 container attach eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 23:32:10 compute-0 podman[78499]: 2025-11-25 23:32:10.124528012 +0000 UTC m=+0.414443058 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:10 compute-0 sudo[78362]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:10 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:10 compute-0 sudo[78588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:10 compute-0 sudo[78588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:10 compute-0 sudo[78588]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:10 compute-0 ceph-mon[75654]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:10 compute-0 ceph-mon[75654]: Saving service mgr spec with placement count:2
Nov 25 23:32:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:10 compute-0 sudo[78613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:10 compute-0 sudo[78613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:10 compute-0 sudo[78613]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:10 compute-0 sudo[78657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:10 compute-0 sudo[78657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:10 compute-0 sudo[78657]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:10 compute-0 sudo[78682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:32:10 compute-0 sudo[78682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 25 23:32:10 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3121850749' entity='client.admin' 
Nov 25 23:32:10 compute-0 systemd[1]: libpod-eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b.scope: Deactivated successfully.
Nov 25 23:32:10 compute-0 podman[78709]: 2025-11-25 23:32:10.763976197 +0000 UTC m=+0.033237421 container died eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 23:32:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-707c05e1e4299089a0a9221804fb86cc4b77b08b687e65fb08231eeba73c048b-merged.mount: Deactivated successfully.
Nov 25 23:32:10 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78736 (sysctl)
Nov 25 23:32:10 compute-0 podman[78709]: 2025-11-25 23:32:10.835956996 +0000 UTC m=+0.105218150 container remove eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b (image=quay.io/ceph/ceph:v18, name=sad_booth, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 23:32:10 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 25 23:32:10 compute-0 systemd[1]: libpod-conmon-eeac51b0e200317da63a61483f5f3834a145772339c5310ba0dbd1d5f982721b.scope: Deactivated successfully.
Nov 25 23:32:10 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 25 23:32:10 compute-0 podman[78739]: 2025-11-25 23:32:10.91856877 +0000 UTC m=+0.054679916 container create 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:10 compute-0 systemd[1]: Started libpod-conmon-659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b.scope.
Nov 25 23:32:10 compute-0 podman[78739]: 2025-11-25 23:32:10.892513691 +0000 UTC m=+0.028624857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3f56440d70701ba28ee65a0327e0ca7941e1bb79107d18a15daf4c8f3b7a27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3f56440d70701ba28ee65a0327e0ca7941e1bb79107d18a15daf4c8f3b7a27/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3f56440d70701ba28ee65a0327e0ca7941e1bb79107d18a15daf4c8f3b7a27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:11 compute-0 podman[78739]: 2025-11-25 23:32:11.016646798 +0000 UTC m=+0.152757994 container init 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 23:32:11 compute-0 podman[78739]: 2025-11-25 23:32:11.028547687 +0000 UTC m=+0.164658843 container start 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 23:32:11 compute-0 podman[78739]: 2025-11-25 23:32:11.032868202 +0000 UTC m=+0.168979358 container attach 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 23:32:11 compute-0 sudo[78682]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:11 compute-0 sudo[78780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:11 compute-0 sudo[78780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:11 compute-0 sudo[78780]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:11 compute-0 ceph-mon[75654]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:11 compute-0 ceph-mon[75654]: Saving service crash spec with placement *
Nov 25 23:32:11 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3121850749' entity='client.admin' 
Nov 25 23:32:11 compute-0 sudo[78822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:11 compute-0 sudo[78822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:11 compute-0 sudo[78822]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:11 compute-0 sudo[78849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:11 compute-0 sudo[78849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:11 compute-0 sudo[78849]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:11 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 25 23:32:11 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:11 compute-0 sudo[78874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 25 23:32:11 compute-0 sudo[78874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:11 compute-0 systemd[1]: libpod-659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b.scope: Deactivated successfully.
Nov 25 23:32:11 compute-0 podman[78739]: 2025-11-25 23:32:11.598747976 +0000 UTC m=+0.734859092 container died 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 23:32:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b3f56440d70701ba28ee65a0327e0ca7941e1bb79107d18a15daf4c8f3b7a27-merged.mount: Deactivated successfully.
Nov 25 23:32:11 compute-0 podman[78739]: 2025-11-25 23:32:11.646425455 +0000 UTC m=+0.782536571 container remove 659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b (image=quay.io/ceph/ceph:v18, name=nervous_galileo, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:11 compute-0 systemd[1]: libpod-conmon-659dbbb4acaecad8c61f663de217c5e32f9d74e4b42969eac7a5766d4bb6b73b.scope: Deactivated successfully.
Nov 25 23:32:11 compute-0 podman[78913]: 2025-11-25 23:32:11.728633947 +0000 UTC m=+0.052129768 container create f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:11 compute-0 systemd[1]: Started libpod-conmon-f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782.scope.
Nov 25 23:32:11 compute-0 sudo[78874]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:11 compute-0 podman[78913]: 2025-11-25 23:32:11.703127424 +0000 UTC m=+0.026623245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2e8b26e91297aa7e5828858ec44eb5ad9bcd92f4a1b8bfdaa3bde117d81ff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2e8b26e91297aa7e5828858ec44eb5ad9bcd92f4a1b8bfdaa3bde117d81ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2e8b26e91297aa7e5828858ec44eb5ad9bcd92f4a1b8bfdaa3bde117d81ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:11 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:11 compute-0 podman[78913]: 2025-11-25 23:32:11.815342131 +0000 UTC m=+0.138837972 container init f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:32:11 compute-0 podman[78913]: 2025-11-25 23:32:11.822144543 +0000 UTC m=+0.145640354 container start f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:32:11 compute-0 podman[78913]: 2025-11-25 23:32:11.825918715 +0000 UTC m=+0.149414526 container attach f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:11 compute-0 sudo[78950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:11 compute-0 sudo[78950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:11 compute-0 sudo[78950]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:11 compute-0 sudo[78976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:11 compute-0 sudo[78976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:11 compute-0 sudo[78976]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:11 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:32:12 compute-0 sudo[79001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:12 compute-0 sudo[79001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:12 compute-0 sudo[79001]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:12 compute-0 sudo[79026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- inventory --format=json-pretty --filter-for-batch
Nov 25 23:32:12 compute-0 sudo[79026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:12 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 23:32:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:12 compute-0 ceph-mgr[75954]: [cephadm INFO root] Added label _admin to host compute-0
Nov 25 23:32:12 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 25 23:32:12 compute-0 keen_lewin[78946]: Added label _admin to host compute-0
Nov 25 23:32:12 compute-0 systemd[1]: libpod-f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782.scope: Deactivated successfully.
Nov 25 23:32:12 compute-0 podman[78913]: 2025-11-25 23:32:12.398350285 +0000 UTC m=+0.721846106 container died f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 23:32:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c2e8b26e91297aa7e5828858ec44eb5ad9bcd92f4a1b8bfdaa3bde117d81ff-merged.mount: Deactivated successfully.
Nov 25 23:32:12 compute-0 podman[78913]: 2025-11-25 23:32:12.444550103 +0000 UTC m=+0.768045924 container remove f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782 (image=quay.io/ceph/ceph:v18, name=keen_lewin, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:12 compute-0 systemd[1]: libpod-conmon-f75e5f183a028e6952035859ca5d8362b6fc9bc889c9f481f2941a63564ba782.scope: Deactivated successfully.
Nov 25 23:32:12 compute-0 podman[79109]: 2025-11-25 23:32:12.493303338 +0000 UTC m=+0.093013343 container create bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:12 compute-0 podman[79134]: 2025-11-25 23:32:12.535047127 +0000 UTC m=+0.068578158 container create cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:32:12 compute-0 systemd[1]: Started libpod-conmon-bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db.scope.
Nov 25 23:32:12 compute-0 podman[79109]: 2025-11-25 23:32:12.464950109 +0000 UTC m=+0.064660104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:12 compute-0 systemd[1]: Started libpod-conmon-cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d.scope.
Nov 25 23:32:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:12 compute-0 ceph-mon[75654]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:12 compute-0 ceph-mon[75654]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:12 compute-0 ceph-mon[75654]: Added label _admin to host compute-0
Nov 25 23:32:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf7be4cdb4210f473a1c225aca7cda7e771261aa4ccba49bab5a00ee57347bd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf7be4cdb4210f473a1c225aca7cda7e771261aa4ccba49bab5a00ee57347bd1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf7be4cdb4210f473a1c225aca7cda7e771261aa4ccba49bab5a00ee57347bd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:12 compute-0 podman[79109]: 2025-11-25 23:32:12.600935463 +0000 UTC m=+0.200645488 container init bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:12 compute-0 podman[79134]: 2025-11-25 23:32:12.511443055 +0000 UTC m=+0.044974176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:12 compute-0 podman[79134]: 2025-11-25 23:32:12.616155601 +0000 UTC m=+0.149686682 container init cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 23:32:12 compute-0 podman[79109]: 2025-11-25 23:32:12.620036005 +0000 UTC m=+0.219745990 container start bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:12 compute-0 podman[79109]: 2025-11-25 23:32:12.624622188 +0000 UTC m=+0.224332193 container attach bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 23:32:12 compute-0 vibrant_torvalds[79152]: 167 167
Nov 25 23:32:12 compute-0 systemd[1]: libpod-bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db.scope: Deactivated successfully.
Nov 25 23:32:12 compute-0 podman[79109]: 2025-11-25 23:32:12.629653633 +0000 UTC m=+0.229363628 container died bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 23:32:12 compute-0 podman[79134]: 2025-11-25 23:32:12.630464194 +0000 UTC m=+0.163995215 container start cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 23:32:12 compute-0 podman[79134]: 2025-11-25 23:32:12.641720266 +0000 UTC m=+0.175251337 container attach cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 23:32:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-065e2d521e13bd5b3beb8187d8f66c5452a45527020845a22b7542444ca49d8a-merged.mount: Deactivated successfully.
Nov 25 23:32:12 compute-0 podman[79109]: 2025-11-25 23:32:12.680610368 +0000 UTC m=+0.280320373 container remove bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:32:12 compute-0 systemd[1]: libpod-conmon-bb51458d95cc2604880a67f81eb45e048fb93171af80d276bf2bd00126f0a6db.scope: Deactivated successfully.
Nov 25 23:32:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 25 23:32:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1524641099' entity='client.admin' 
Nov 25 23:32:13 compute-0 systemd[1]: libpod-cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d.scope: Deactivated successfully.
Nov 25 23:32:13 compute-0 podman[79197]: 2025-11-25 23:32:13.229936308 +0000 UTC m=+0.029444930 container died cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:32:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf7be4cdb4210f473a1c225aca7cda7e771261aa4ccba49bab5a00ee57347bd1-merged.mount: Deactivated successfully.
Nov 25 23:32:13 compute-0 podman[79197]: 2025-11-25 23:32:13.281557002 +0000 UTC m=+0.081065684 container remove cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d (image=quay.io/ceph/ceph:v18, name=frosty_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:13 compute-0 systemd[1]: libpod-conmon-cd1813e992cd12b6cfd51df56dc950e259390b68341ef4040f7c281485f8687d.scope: Deactivated successfully.
Nov 25 23:32:13 compute-0 podman[79211]: 2025-11-25 23:32:13.376666731 +0000 UTC m=+0.061013336 container create 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 23:32:13 compute-0 systemd[1]: Started libpod-conmon-1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d.scope.
Nov 25 23:32:13 compute-0 podman[79211]: 2025-11-25 23:32:13.347461818 +0000 UTC m=+0.031808473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b807784ebe330de1fca1bf4ed3a61b29cd54376dfb4298eabf1004dd361b78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b807784ebe330de1fca1bf4ed3a61b29cd54376dfb4298eabf1004dd361b78/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09b807784ebe330de1fca1bf4ed3a61b29cd54376dfb4298eabf1004dd361b78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:13 compute-0 podman[79211]: 2025-11-25 23:32:13.471829651 +0000 UTC m=+0.156176266 container init 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:13 compute-0 podman[79211]: 2025-11-25 23:32:13.482769764 +0000 UTC m=+0.167116369 container start 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 23:32:13 compute-0 podman[79211]: 2025-11-25 23:32:13.487451599 +0000 UTC m=+0.171798174 container attach 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:32:13 compute-0 ceph-mgr[75954]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 25 23:32:14 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 25 23:32:14 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3552777047' entity='client.admin' 
Nov 25 23:32:14 compute-0 cool_grothendieck[79227]: set mgr/dashboard/cluster/status
Nov 25 23:32:14 compute-0 systemd[1]: libpod-1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d.scope: Deactivated successfully.
Nov 25 23:32:14 compute-0 podman[79211]: 2025-11-25 23:32:14.168557172 +0000 UTC m=+0.852903757 container died 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:14 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1524641099' entity='client.admin' 
Nov 25 23:32:14 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3552777047' entity='client.admin' 
Nov 25 23:32:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-09b807784ebe330de1fca1bf4ed3a61b29cd54376dfb4298eabf1004dd361b78-merged.mount: Deactivated successfully.
Nov 25 23:32:14 compute-0 podman[79211]: 2025-11-25 23:32:14.225274101 +0000 UTC m=+0.909620666 container remove 1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d (image=quay.io/ceph/ceph:v18, name=cool_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:14 compute-0 systemd[1]: libpod-conmon-1e6c0b11af388b07196afb32c5a023d2d578295825d52d00f0f9ffe18c922a2d.scope: Deactivated successfully.
Nov 25 23:32:14 compute-0 sudo[74630]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:14 compute-0 podman[79273]: 2025-11-25 23:32:14.440075358 +0000 UTC m=+0.065511967 container create 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:14 compute-0 systemd[1]: Started libpod-conmon-3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be.scope.
Nov 25 23:32:14 compute-0 podman[79273]: 2025-11-25 23:32:14.413077934 +0000 UTC m=+0.038514583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0693bf55e49f9dc19ec84d17ac1cc19c6cb7ff5c4ce40dcf941b3282f0d2a1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0693bf55e49f9dc19ec84d17ac1cc19c6cb7ff5c4ce40dcf941b3282f0d2a1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0693bf55e49f9dc19ec84d17ac1cc19c6cb7ff5c4ce40dcf941b3282f0d2a1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0693bf55e49f9dc19ec84d17ac1cc19c6cb7ff5c4ce40dcf941b3282f0d2a1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:14 compute-0 podman[79273]: 2025-11-25 23:32:14.532589607 +0000 UTC m=+0.158026176 container init 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 23:32:14 compute-0 podman[79273]: 2025-11-25 23:32:14.548057282 +0000 UTC m=+0.173493891 container start 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:32:14 compute-0 podman[79273]: 2025-11-25 23:32:14.552251334 +0000 UTC m=+0.177687923 container attach 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 23:32:14 compute-0 sudo[79318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzqqqcdfzcmuwpqzhxnhrubtkapioqxr ; /usr/bin/python3'
Nov 25 23:32:14 compute-0 sudo[79318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:14 compute-0 python3[79320]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:32:14 compute-0 podman[79321]: 2025-11-25 23:32:14.860535605 +0000 UTC m=+0.094216616 container create 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 23:32:14 compute-0 systemd[1]: Started libpod-conmon-9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9.scope.
Nov 25 23:32:14 compute-0 podman[79321]: 2025-11-25 23:32:14.831366273 +0000 UTC m=+0.065047294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d16fa754e3f7b4ff9c93bd2d3181be7eecac2b14e238652380e3f593ec0de8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d16fa754e3f7b4ff9c93bd2d3181be7eecac2b14e238652380e3f593ec0de8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:14 compute-0 podman[79321]: 2025-11-25 23:32:14.95586505 +0000 UTC m=+0.189546131 container init 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 23:32:14 compute-0 podman[79321]: 2025-11-25 23:32:14.963419673 +0000 UTC m=+0.197100684 container start 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:14 compute-0 podman[79321]: 2025-11-25 23:32:14.967957144 +0000 UTC m=+0.201638195 container attach 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 25 23:32:15 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3228972506' entity='client.admin' 
Nov 25 23:32:15 compute-0 systemd[1]: libpod-9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9.scope: Deactivated successfully.
Nov 25 23:32:15 compute-0 conmon[79337]: conmon 9a4ecace5f080e974217 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9.scope/container/memory.events
Nov 25 23:32:15 compute-0 podman[79321]: 2025-11-25 23:32:15.565391383 +0000 UTC m=+0.799072444 container died 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 23:32:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d16fa754e3f7b4ff9c93bd2d3181be7eecac2b14e238652380e3f593ec0de8d-merged.mount: Deactivated successfully.
Nov 25 23:32:15 compute-0 podman[79321]: 2025-11-25 23:32:15.618856817 +0000 UTC m=+0.852537788 container remove 9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9 (image=quay.io/ceph/ceph:v18, name=quizzical_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 25 23:32:15 compute-0 systemd[1]: libpod-conmon-9a4ecace5f080e97421703bd98c1b81b578890ab62611cf134e191cfee20a7c9.scope: Deactivated successfully.
Nov 25 23:32:15 compute-0 sudo[79318]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:15 compute-0 ceph-mgr[75954]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 25 23:32:15 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:15 compute-0 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 23:32:16 compute-0 elastic_hermann[79290]: [
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:     {
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:         "available": false,
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:         "ceph_device": false,
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:         "lsm_data": {},
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:         "lvs": [],
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:         "path": "/dev/sr0",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:         "rejected_reasons": [
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "Insufficient space (<5GB)",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "Has a FileSystem"
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:         ],
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:         "sys_api": {
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "actuators": null,
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "device_nodes": "sr0",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "devname": "sr0",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "human_readable_size": "482.00 KB",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "id_bus": "ata",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "model": "QEMU DVD-ROM",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "nr_requests": "2",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "parent": "/dev/sr0",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "partitions": {},
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "path": "/dev/sr0",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "removable": "1",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "rev": "2.5+",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "ro": "0",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "rotational": "1",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "sas_address": "",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "sas_device_handle": "",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "scheduler_mode": "mq-deadline",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "sectors": 0,
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "sectorsize": "2048",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "size": 493568.0,
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "support_discard": "2048",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "type": "disk",
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:             "vendor": "QEMU"
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:         }
Nov 25 23:32:16 compute-0 elastic_hermann[79290]:     }
Nov 25 23:32:16 compute-0 elastic_hermann[79290]: ]
Nov 25 23:32:16 compute-0 systemd[1]: libpod-3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be.scope: Deactivated successfully.
Nov 25 23:32:16 compute-0 podman[79273]: 2025-11-25 23:32:16.09181519 +0000 UTC m=+1.717251769 container died 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 23:32:16 compute-0 systemd[1]: libpod-3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be.scope: Consumed 1.587s CPU time.
Nov 25 23:32:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0693bf55e49f9dc19ec84d17ac1cc19c6cb7ff5c4ce40dcf941b3282f0d2a1c-merged.mount: Deactivated successfully.
Nov 25 23:32:16 compute-0 podman[79273]: 2025-11-25 23:32:16.167583712 +0000 UTC m=+1.793020321 container remove 3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 25 23:32:16 compute-0 systemd[1]: libpod-conmon-3ca2afce943eb1b521eee149f6df3635bfba6fa8c55903375521fa95eb7252be.scope: Deactivated successfully.
Nov 25 23:32:16 compute-0 sudo[79026]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:32:16 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:16 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:32:16 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:16 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 23:32:16 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 23:32:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:32:16 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:32:16 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:32:16 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 25 23:32:16 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 25 23:32:16 compute-0 sudo[81343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:16 compute-0 sudo[81343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:16 compute-0 sudo[81343]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 sudo[81368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 23:32:16 compute-0 sudo[81368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:16 compute-0 sudo[81368]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 sudo[81394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:16 compute-0 sudo[81394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:16 compute-0 sudo[81394]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3228972506' entity='client.admin' 
Nov 25 23:32:16 compute-0 ceph-mon[75654]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:16 compute-0 ceph-mon[75654]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 25 23:32:16 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:16 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:16 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:16 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:16 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 23:32:16 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:16 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:32:16 compute-0 ceph-mon[75654]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 23:32:16 compute-0 sudo[81441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph
Nov 25 23:32:16 compute-0 sudo[81441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:16 compute-0 sudo[81441]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 sudo[81492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:16 compute-0 sudo[81492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:16 compute-0 sudo[81492]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 sudo[81538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbwccxegjzodsqlmzhfnduplmhhnnclx ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764113536.0793626-36633-250666746881010/async_wrapper.py j144314215060 30 /home/zuul/.ansible/tmp/ansible-tmp-1764113536.0793626-36633-250666746881010/AnsiballZ_command.py _'
Nov 25 23:32:16 compute-0 sudo[81538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:16 compute-0 sudo[81543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph/ceph.conf.new
Nov 25 23:32:16 compute-0 sudo[81543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:16 compute-0 sudo[81543]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 ansible-async_wrapper.py[81542]: Invoked with j144314215060 30 /home/zuul/.ansible/tmp/ansible-tmp-1764113536.0793626-36633-250666746881010/AnsiballZ_command.py _
Nov 25 23:32:16 compute-0 ansible-async_wrapper.py[81593]: Starting module and watcher
Nov 25 23:32:16 compute-0 ansible-async_wrapper.py[81593]: Start watching 81594 (30)
Nov 25 23:32:16 compute-0 ansible-async_wrapper.py[81594]: Start module (81594)
Nov 25 23:32:16 compute-0 ansible-async_wrapper.py[81542]: Return async_wrapper task started.
Nov 25 23:32:16 compute-0 sudo[81568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:16 compute-0 sudo[81568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:16 compute-0 sudo[81568]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 sudo[81538]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 sudo[81598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:16 compute-0 sudo[81598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:16 compute-0 sudo[81598]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 sudo[81623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:16 compute-0 sudo[81623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:16 compute-0 sudo[81623]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:16 compute-0 python3[81596]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:32:17 compute-0 sudo[81648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph/ceph.conf.new
Nov 25 23:32:17 compute-0 sudo[81648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81648]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 podman[81655]: 2025-11-25 23:32:17.051644052 +0000 UTC m=+0.056318320 container create 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 23:32:17 compute-0 systemd[1]: Started libpod-conmon-7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819.scope.
Nov 25 23:32:17 compute-0 podman[81655]: 2025-11-25 23:32:17.022291515 +0000 UTC m=+0.026965883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2873d38b015dbe2f35d30ecae2014be736c8df2573dc29b1e6459dcf3baea5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2873d38b015dbe2f35d30ecae2014be736c8df2573dc29b1e6459dcf3baea5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:17 compute-0 podman[81655]: 2025-11-25 23:32:17.148946059 +0000 UTC m=+0.153620387 container init 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 23:32:17 compute-0 podman[81655]: 2025-11-25 23:32:17.160690544 +0000 UTC m=+0.165364812 container start 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 23:32:17 compute-0 podman[81655]: 2025-11-25 23:32:17.163771976 +0000 UTC m=+0.168446244 container attach 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:32:17 compute-0 sudo[81713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:17 compute-0 sudo[81713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81713]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 sudo[81740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph/ceph.conf.new
Nov 25 23:32:17 compute-0 sudo[81740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81740]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 sudo[81765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:17 compute-0 sudo[81765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81765]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 sudo[81790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph/ceph.conf.new
Nov 25 23:32:17 compute-0 sudo[81790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81790]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 sudo[81815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:17 compute-0 sudo[81815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81815]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 sudo[81859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 25 23:32:17 compute-0 sudo[81859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81859]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf
Nov 25 23:32:17 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf
Nov 25 23:32:17 compute-0 sudo[81884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:17 compute-0 sudo[81884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81884]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 sudo[81909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config
Nov 25 23:32:17 compute-0 sudo[81909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81909]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:32:17 compute-0 determined_hertz[81704]: 
Nov 25 23:32:17 compute-0 determined_hertz[81704]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 23:32:17 compute-0 systemd[1]: libpod-7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819.scope: Deactivated successfully.
Nov 25 23:32:17 compute-0 podman[81655]: 2025-11-25 23:32:17.673721622 +0000 UTC m=+0.678395880 container died 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:17 compute-0 sudo[81934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:17 compute-0 sudo[81934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81934]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec2873d38b015dbe2f35d30ecae2014be736c8df2573dc29b1e6459dcf3baea5-merged.mount: Deactivated successfully.
Nov 25 23:32:17 compute-0 podman[81655]: 2025-11-25 23:32:17.723476245 +0000 UTC m=+0.728150553 container remove 7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819 (image=quay.io/ceph/ceph:v18, name=determined_hertz, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:17 compute-0 systemd[1]: libpod-conmon-7e07ec8624d3efbea0ef7cd17ccb0b146dd858d6c72124d1fa83250249c8c819.scope: Deactivated successfully.
Nov 25 23:32:17 compute-0 sudo[81968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config
Nov 25 23:32:17 compute-0 ansible-async_wrapper.py[81594]: Module complete (81594)
Nov 25 23:32:17 compute-0 sudo[81968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81968]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 sudo[81999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:17 compute-0 sudo[81999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[81999]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 sudo[82024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf.new
Nov 25 23:32:17 compute-0 sudo[82024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[82024]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 sudo[82072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:17 compute-0 sudo[82072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[82072]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:17 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:17 compute-0 sudo[82097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:17 compute-0 sudo[82097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:17 compute-0 sudo[82097]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:18 compute-0 sudo[82122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82122]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf.new
Nov 25 23:32:18 compute-0 sudo[82147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82147]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82200]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqmjyoabrfrolbdzotahijjminyoavoc ; /usr/bin/python3'
Nov 25 23:32:18 compute-0 sudo[82200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:18 compute-0 sudo[82221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:18 compute-0 sudo[82221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82221]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 python3[82216]: ansible-ansible.legacy.async_status Invoked with jid=j144314215060.81542 mode=status _async_dir=/root/.ansible_async
Nov 25 23:32:18 compute-0 sudo[82200]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf.new
Nov 25 23:32:18 compute-0 sudo[82246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82246]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:18 compute-0 sudo[82271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82271]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfolxamidoopiwxtsfbelziikbkopjqm ; /usr/bin/python3'
Nov 25 23:32:18 compute-0 sudo[82349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:18 compute-0 sudo[82332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf.new
Nov 25 23:32:18 compute-0 sudo[82332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82332]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 python3[82364]: ansible-ansible.legacy.async_status Invoked with jid=j144314215060.81542 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 23:32:18 compute-0 sudo[82370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:18 compute-0 sudo[82370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82349]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82370]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf.new /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf
Nov 25 23:32:18 compute-0 sudo[82395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82395]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 23:32:18 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 23:32:18 compute-0 sudo[82420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:18 compute-0 sudo[82420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82420]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 25 23:32:18 compute-0 sudo[82445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82445]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:18 compute-0 sudo[82470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82470]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:18 compute-0 sudo[82524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycfrsapexvgjuaggdpdlsvkwgpjpkaez ; /usr/bin/python3'
Nov 25 23:32:18 compute-0 sudo[82524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:18 compute-0 sudo[82516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph
Nov 25 23:32:18 compute-0 sudo[82516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:18 compute-0 sudo[82516]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 sudo[82546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:19 compute-0 sudo[82546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:19 compute-0 sudo[82546]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 ceph-mon[75654]: Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.conf
Nov 25 23:32:19 compute-0 ceph-mon[75654]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:32:19 compute-0 ceph-mon[75654]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:19 compute-0 python3[82543]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 23:32:19 compute-0 sudo[82571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph/ceph.client.admin.keyring.new
Nov 25 23:32:19 compute-0 sudo[82571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:19 compute-0 sudo[82571]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 sudo[82524]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 sudo[82598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:19 compute-0 sudo[82598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:19 compute-0 sudo[82598]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 sudo[82623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:19 compute-0 sudo[82623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:19 compute-0 sudo[82623]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 sudo[82648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:19 compute-0 sudo[82648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:19 compute-0 sudo[82648]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 sudo[82673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph/ceph.client.admin.keyring.new
Nov 25 23:32:19 compute-0 sudo[82673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:19 compute-0 sudo[82673]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 sudo[82721]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdqfgxhqdjkqlvbtesvsjblivlzwkorg ; /usr/bin/python3'
Nov 25 23:32:19 compute-0 sudo[82721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:19 compute-0 sudo[82747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:19 compute-0 sudo[82747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:19 compute-0 sudo[82747]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 python3[82730]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:32:19 compute-0 sudo[82772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph/ceph.client.admin.keyring.new
Nov 25 23:32:19 compute-0 sudo[82772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:19 compute-0 sudo[82772]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 sudo[82803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:19 compute-0 sudo[82803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:19 compute-0 podman[82796]: 2025-11-25 23:32:19.588948206 +0000 UTC m=+0.021230140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:19 compute-0 sudo[82803]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:19 compute-0 podman[82796]: 2025-11-25 23:32:19.958874489 +0000 UTC m=+0.391156363 container create 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:19 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:20 compute-0 systemd[1]: Started libpod-conmon-6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9.scope.
Nov 25 23:32:20 compute-0 sudo[82835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph/ceph.client.admin.keyring.new
Nov 25 23:32:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:20 compute-0 sudo[82835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fc6f375395c171c2839986ba3dd460afe166443b7ace335b54c3240f241615/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fc6f375395c171c2839986ba3dd460afe166443b7ace335b54c3240f241615/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fc6f375395c171c2839986ba3dd460afe166443b7ace335b54c3240f241615/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:20 compute-0 ceph-mon[75654]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 23:32:20 compute-0 sudo[82835]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 podman[82796]: 2025-11-25 23:32:20.066856142 +0000 UTC m=+0.499137996 container init 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:32:20 compute-0 podman[82796]: 2025-11-25 23:32:20.075429793 +0000 UTC m=+0.507711677 container start 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:20 compute-0 podman[82796]: 2025-11-25 23:32:20.079311686 +0000 UTC m=+0.511593630 container attach 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:20 compute-0 sudo[82866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:20 compute-0 sudo[82866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[82866]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:20 compute-0 sudo[82891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 25 23:32:20 compute-0 sudo[82891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[82891]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring
Nov 25 23:32:20 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring
Nov 25 23:32:20 compute-0 sudo[82916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:20 compute-0 sudo[82916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[82916]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 sudo[82941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config
Nov 25 23:32:20 compute-0 sudo[82941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[82941]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 sudo[82983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:20 compute-0 sudo[82983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[82983]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 sudo[83010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config
Nov 25 23:32:20 compute-0 sudo[83010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[83010]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:32:20 compute-0 sudo[83035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:20 compute-0 nifty_montalcini[82860]: 
Nov 25 23:32:20 compute-0 nifty_montalcini[82860]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 23:32:20 compute-0 sudo[83035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[83035]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 systemd[1]: libpod-6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9.scope: Deactivated successfully.
Nov 25 23:32:20 compute-0 podman[82796]: 2025-11-25 23:32:20.641660626 +0000 UTC m=+1.073942480 container died 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2fc6f375395c171c2839986ba3dd460afe166443b7ace335b54c3240f241615-merged.mount: Deactivated successfully.
Nov 25 23:32:20 compute-0 podman[82796]: 2025-11-25 23:32:20.686138018 +0000 UTC m=+1.118419862 container remove 6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9 (image=quay.io/ceph/ceph:v18, name=nifty_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:20 compute-0 systemd[1]: libpod-conmon-6b97dbf2128c46db21038b2fe701bbd048a3fd31197e790365399cd50fa326b9.scope: Deactivated successfully.
Nov 25 23:32:20 compute-0 sudo[83062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring.new
Nov 25 23:32:20 compute-0 sudo[82721]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 sudo[83062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[83062]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 sudo[83098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:20 compute-0 sudo[83098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[83098]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 sudo[83123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:20 compute-0 sudo[83123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[83123]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:20 compute-0 sudo[83148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:20 compute-0 sudo[83148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:20 compute-0 sudo[83148]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 sudo[83202]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaejigkbftdawyiwzobwwdwjczwspxpk ; /usr/bin/python3'
Nov 25 23:32:21 compute-0 sudo[83202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:21 compute-0 ceph-mon[75654]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:21 compute-0 ceph-mon[75654]: Updating compute-0:/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring
Nov 25 23:32:21 compute-0 sudo[83186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring.new
Nov 25 23:32:21 compute-0 sudo[83186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 sudo[83186]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 python3[83216]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:32:21 compute-0 sudo[83247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:21 compute-0 sudo[83247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 sudo[83247]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 podman[83259]: 2025-11-25 23:32:21.253672296 +0000 UTC m=+0.051139060 container create 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:21 compute-0 systemd[1]: Started libpod-conmon-42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b.scope.
Nov 25 23:32:21 compute-0 sudo[83285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring.new
Nov 25 23:32:21 compute-0 sudo[83285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 sudo[83285]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 podman[83259]: 2025-11-25 23:32:21.237053161 +0000 UTC m=+0.034519885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09abc300a67c9b3c1607750f4fb8b8ba4f75f8232944c489065ce391b42f9bf9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09abc300a67c9b3c1607750f4fb8b8ba4f75f8232944c489065ce391b42f9bf9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09abc300a67c9b3c1607750f4fb8b8ba4f75f8232944c489065ce391b42f9bf9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:21 compute-0 podman[83259]: 2025-11-25 23:32:21.364742112 +0000 UTC m=+0.162208906 container init 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:21 compute-0 podman[83259]: 2025-11-25 23:32:21.375222694 +0000 UTC m=+0.172689438 container start 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:32:21 compute-0 podman[83259]: 2025-11-25 23:32:21.379540639 +0000 UTC m=+0.177007403 container attach 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 23:32:21 compute-0 sudo[83315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:21 compute-0 sudo[83315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 sudo[83315]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 sudo[83341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring.new
Nov 25 23:32:21 compute-0 sudo[83341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 sudo[83341]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 sudo[83366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:21 compute-0 sudo[83366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 sudo[83366]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 sudo[83391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-101922db-575f-58e2-980f-928050464f69/var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring.new /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/config/ceph.client.admin.keyring
Nov 25 23:32:21 compute-0 sudo[83391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 sudo[83391]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:32:21 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:21 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:32:21 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:21 compute-0 ceph-mgr[75954]: [progress INFO root] update: starting ev 01d37128-2f77-42ad-b9c7-ed25c454262f (Updating crash deployment (+1 -> 1))
Nov 25 23:32:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 25 23:32:21 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 23:32:21 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 23:32:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:32:21 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:21 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 25 23:32:21 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 25 23:32:21 compute-0 sudo[83426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:21 compute-0 sudo[83426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 sudo[83426]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 ansible-async_wrapper.py[81593]: Done in kid B.
Nov 25 23:32:21 compute-0 sudo[83460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:21 compute-0 sudo[83460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 sudo[83460]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 sudo[83485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:21 compute-0 sudo[83485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 sudo[83485]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 25 23:32:21 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3516919307' entity='client.admin' 
Nov 25 23:32:21 compute-0 systemd[1]: libpod-42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b.scope: Deactivated successfully.
Nov 25 23:32:21 compute-0 conmon[83311]: conmon 42c79a575f0f05476a33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b.scope/container/memory.events
Nov 25 23:32:21 compute-0 podman[83259]: 2025-11-25 23:32:21.917053264 +0000 UTC m=+0.714520018 container died 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-09abc300a67c9b3c1607750f4fb8b8ba4f75f8232944c489065ce391b42f9bf9-merged.mount: Deactivated successfully.
Nov 25 23:32:21 compute-0 sudo[83511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:21 compute-0 sudo[83511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:21 compute-0 podman[83259]: 2025-11-25 23:32:21.973424664 +0000 UTC m=+0.770891398 container remove 42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b (image=quay.io/ceph/ceph:v18, name=objective_dirac, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:32:21 compute-0 systemd[1]: libpod-conmon-42c79a575f0f05476a338b10af31671eafc354014f3431244c8af8b17a98266b.scope: Deactivated successfully.
Nov 25 23:32:21 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:21 compute-0 sudo[83202]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:22 compute-0 ceph-mon[75654]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:32:22 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:22 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:22 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:22 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 23:32:22 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 23:32:22 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:22 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3516919307' entity='client.admin' 
Nov 25 23:32:22 compute-0 sudo[83587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkjjzqnwjubndvzovynexfeaqepjpdms ; /usr/bin/python3'
Nov 25 23:32:22 compute-0 sudo[83587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:22 compute-0 python3[83591]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:32:22 compute-0 podman[83613]: 2025-11-25 23:32:22.410843686 +0000 UTC m=+0.067913601 container create 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:22 compute-0 podman[83625]: 2025-11-25 23:32:22.451452525 +0000 UTC m=+0.064926302 container create 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:22 compute-0 systemd[1]: Started libpod-conmon-59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794.scope.
Nov 25 23:32:22 compute-0 podman[83613]: 2025-11-25 23:32:22.381378397 +0000 UTC m=+0.038448362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:22 compute-0 systemd[1]: Started libpod-conmon-3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b.scope.
Nov 25 23:32:22 compute-0 podman[83613]: 2025-11-25 23:32:22.506562001 +0000 UTC m=+0.163631906 container init 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:22 compute-0 podman[83613]: 2025-11-25 23:32:22.51322164 +0000 UTC m=+0.170291555 container start 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:32:22 compute-0 podman[83625]: 2025-11-25 23:32:22.423399443 +0000 UTC m=+0.036873270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:22 compute-0 nifty_agnesi[83641]: 167 167
Nov 25 23:32:22 compute-0 podman[83613]: 2025-11-25 23:32:22.519286182 +0000 UTC m=+0.176356147 container attach 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:32:22 compute-0 podman[83613]: 2025-11-25 23:32:22.519692453 +0000 UTC m=+0.176762378 container died 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:22 compute-0 systemd[1]: libpod-59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794.scope: Deactivated successfully.
Nov 25 23:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37ef6e4943db87d2472a137035eabb48dea597a1ceee342439cee8f846145d0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37ef6e4943db87d2472a137035eabb48dea597a1ceee342439cee8f846145d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37ef6e4943db87d2472a137035eabb48dea597a1ceee342439cee8f846145d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:22 compute-0 podman[83625]: 2025-11-25 23:32:22.551119375 +0000 UTC m=+0.164593152 container init 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e1c47754ab309e550aef2018ea675b819e60f29a8fa2d7f91034947ab575329-merged.mount: Deactivated successfully.
Nov 25 23:32:22 compute-0 podman[83625]: 2025-11-25 23:32:22.563294991 +0000 UTC m=+0.176768778 container start 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 23:32:22 compute-0 podman[83625]: 2025-11-25 23:32:22.567523894 +0000 UTC m=+0.180997691 container attach 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:22 compute-0 podman[83613]: 2025-11-25 23:32:22.589245167 +0000 UTC m=+0.246315092 container remove 59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 23:32:22 compute-0 systemd[1]: libpod-conmon-59cc9d136a094a773d2b64cc19337870123b78a5bff29219ef1e3ceb9e8ef794.scope: Deactivated successfully.
Nov 25 23:32:22 compute-0 systemd[1]: Reloading.
Nov 25 23:32:22 compute-0 systemd-rc-local-generator[83693]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:22 compute-0 systemd-sysv-generator[83696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:23 compute-0 systemd[1]: Reloading.
Nov 25 23:32:23 compute-0 ceph-mon[75654]: Deploying daemon crash.compute-0 on compute-0
Nov 25 23:32:23 compute-0 ceph-mon[75654]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:23 compute-0 systemd-sysv-generator[83755]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:23 compute-0 systemd-rc-local-generator[83752]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 25 23:32:23 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/113567869' entity='client.admin' 
Nov 25 23:32:23 compute-0 podman[83625]: 2025-11-25 23:32:23.141727332 +0000 UTC m=+0.755201109 container died 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:23 compute-0 systemd[1]: libpod-3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b.scope: Deactivated successfully.
Nov 25 23:32:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a37ef6e4943db87d2472a137035eabb48dea597a1ceee342439cee8f846145d0-merged.mount: Deactivated successfully.
Nov 25 23:32:23 compute-0 podman[83625]: 2025-11-25 23:32:23.29314279 +0000 UTC m=+0.906616537 container remove 3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b (image=quay.io/ceph/ceph:v18, name=eager_jennings, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 23:32:23 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:32:23 compute-0 systemd[1]: libpod-conmon-3e9fc3a02d6c8948c98fe7b80de5da372d69e4b066f58c85783a1a9d7826ea6b.scope: Deactivated successfully.
Nov 25 23:32:23 compute-0 sudo[83587]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:23 compute-0 sudo[83854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvhmtfmwuxmyvbnhomydiaozwfdkghsm ; /usr/bin/python3'
Nov 25 23:32:23 compute-0 sudo[83854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:23 compute-0 podman[83850]: 2025-11-25 23:32:23.632872393 +0000 UTC m=+0.075466903 container create 42d7403704ba2cd3e1da4f13821251ee83623d7ba3755973192c5a74b5ccbae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:23 compute-0 podman[83850]: 2025-11-25 23:32:23.595863242 +0000 UTC m=+0.038457832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:23 compute-0 python3[83863]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/828cfc035f3321e5f3babfcc650e4dbb0913a988a56b75f575c0c560162df02a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/828cfc035f3321e5f3babfcc650e4dbb0913a988a56b75f575c0c560162df02a/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/828cfc035f3321e5f3babfcc650e4dbb0913a988a56b75f575c0c560162df02a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/828cfc035f3321e5f3babfcc650e4dbb0913a988a56b75f575c0c560162df02a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:23 compute-0 podman[83850]: 2025-11-25 23:32:23.740543919 +0000 UTC m=+0.183138459 container init 42d7403704ba2cd3e1da4f13821251ee83623d7ba3755973192c5a74b5ccbae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 23:32:23 compute-0 podman[83850]: 2025-11-25 23:32:23.750742083 +0000 UTC m=+0.193336593 container start 42d7403704ba2cd3e1da4f13821251ee83623d7ba3755973192c5a74b5ccbae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:23 compute-0 bash[83850]: 42d7403704ba2cd3e1da4f13821251ee83623d7ba3755973192c5a74b5ccbae0
Nov 25 23:32:23 compute-0 systemd[1]: Started Ceph crash.compute-0 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:32:23 compute-0 sudo[83511]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:23 compute-0 podman[83873]: 2025-11-25 23:32:23.799762066 +0000 UTC m=+0.072032662 container create f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:32:23 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:23 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 23:32:23 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:23 compute-0 ceph-mgr[75954]: [progress INFO root] complete: finished ev 01d37128-2f77-42ad-b9c7-ed25c454262f (Updating crash deployment (+1 -> 1))
Nov 25 23:32:23 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event 01d37128-2f77-42ad-b9c7-ed25c454262f (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 25 23:32:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 25 23:32:23 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:23 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev a2283866-e30d-4884-a246-0fc4ab05bcc8 does not exist
Nov 25 23:32:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 23:32:23 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:23 compute-0 ceph-mgr[75954]: [progress INFO root] update: starting ev 5e4e79e8-e0bf-4510-9e37-72aa1cc6fb23 (Updating mgr deployment (+1 -> 2))
Nov 25 23:32:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.cckgxa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 25 23:32:23 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cckgxa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 23:32:23 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cckgxa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 23:32:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 23:32:23 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 23:32:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:32:23 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:23 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.cckgxa on compute-0
Nov 25 23:32:23 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.cckgxa on compute-0
Nov 25 23:32:23 compute-0 systemd[1]: Started libpod-conmon-f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33.scope.
Nov 25 23:32:23 compute-0 podman[83873]: 2025-11-25 23:32:23.770546193 +0000 UTC m=+0.042816819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b09875b329b4241c77e1cb3f020a304ae9cb9abc6ba0c6d9fc1e350f650592/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b09875b329b4241c77e1cb3f020a304ae9cb9abc6ba0c6d9fc1e350f650592/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62b09875b329b4241c77e1cb3f020a304ae9cb9abc6ba0c6d9fc1e350f650592/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:23 compute-0 podman[83873]: 2025-11-25 23:32:23.928842835 +0000 UTC m=+0.201113481 container init f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:23 compute-0 podman[83873]: 2025-11-25 23:32:23.939408288 +0000 UTC m=+0.211678894 container start f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 23:32:23 compute-0 podman[83873]: 2025-11-25 23:32:23.945066879 +0000 UTC m=+0.217337485 container attach f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 23:32:23 compute-0 sudo[83892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:23 compute-0 sudo[83892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:23 compute-0 sudo[83892]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:23 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 25 23:32:24 compute-0 sudo[83920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:24 compute-0 sudo[83920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:24 compute-0 sudo[83920]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:24 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/113567869' entity='client.admin' 
Nov 25 23:32:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cckgxa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 23:32:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cckgxa", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 23:32:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 23:32:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:24 compute-0 sudo[83947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:24 compute-0 sudo[83947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:24 compute-0 sudo[83947]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.224+0000 7f5ee9164640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 23:32:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.224+0000 7f5ee9164640 -1 AuthRegistry(0x7f5ee4067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 23:32:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.225+0000 7f5ee9164640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 23:32:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.225+0000 7f5ee9164640 -1 AuthRegistry(0x7f5ee9163000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 23:32:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.227+0000 7f5ee2d76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 25 23:32:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: 2025-11-25T23:32:24.227+0000 7f5ee9164640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 25 23:32:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 25 23:32:24 compute-0 sudo[83972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:24 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-crash-compute-0[83871]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 25 23:32:24 compute-0 sudo[83972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 25 23:32:24 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3921811036' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 23:32:24 compute-0 podman[84070]: 2025-11-25 23:32:24.665495105 +0000 UTC m=+0.052700383 container create 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:24 compute-0 systemd[1]: Started libpod-conmon-7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace.scope.
Nov 25 23:32:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:24 compute-0 podman[84070]: 2025-11-25 23:32:24.642226991 +0000 UTC m=+0.029432299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:24 compute-0 podman[84070]: 2025-11-25 23:32:24.75151245 +0000 UTC m=+0.138717748 container init 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 23:32:24 compute-0 podman[84070]: 2025-11-25 23:32:24.761291092 +0000 UTC m=+0.148496390 container start 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:24 compute-0 podman[84070]: 2025-11-25 23:32:24.764817367 +0000 UTC m=+0.152022675 container attach 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:32:24 compute-0 nice_beaver[84087]: 167 167
Nov 25 23:32:24 compute-0 systemd[1]: libpod-7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace.scope: Deactivated successfully.
Nov 25 23:32:24 compute-0 conmon[84087]: conmon 7f667a74639b8f1d41d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace.scope/container/memory.events
Nov 25 23:32:24 compute-0 podman[84070]: 2025-11-25 23:32:24.76903764 +0000 UTC m=+0.156242938 container died 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:32:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b3a924fa9680c3adb12ead1f656292db3fbe344244cfa35e3758d713a2cf845-merged.mount: Deactivated successfully.
Nov 25 23:32:24 compute-0 podman[84070]: 2025-11-25 23:32:24.814303643 +0000 UTC m=+0.201508931 container remove 7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:32:24 compute-0 systemd[1]: libpod-conmon-7f667a74639b8f1d41d6dd20f825953cc13f75e0a6ee13488fc8fe454dde5ace.scope: Deactivated successfully.
Nov 25 23:32:24 compute-0 systemd[1]: Reloading.
Nov 25 23:32:24 compute-0 systemd-rc-local-generator[84130]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:24 compute-0 systemd-sysv-generator[84136]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 25 23:32:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:32:25 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3921811036' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 23:32:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 25 23:32:25 compute-0 hopeful_kepler[83891]: set require_min_compat_client to mimic
Nov 25 23:32:25 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 25 23:32:25 compute-0 ceph-mon[75654]: Deploying daemon mgr.compute-0.cckgxa on compute-0
Nov 25 23:32:25 compute-0 ceph-mon[75654]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:25 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3921811036' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 25 23:32:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:25 compute-0 systemd[1]: libpod-f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33.scope: Deactivated successfully.
Nov 25 23:32:25 compute-0 podman[84143]: 2025-11-25 23:32:25.216425129 +0000 UTC m=+0.028655509 container died f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 25 23:32:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-62b09875b329b4241c77e1cb3f020a304ae9cb9abc6ba0c6d9fc1e350f650592-merged.mount: Deactivated successfully.
Nov 25 23:32:25 compute-0 podman[84143]: 2025-11-25 23:32:25.262451712 +0000 UTC m=+0.074682092 container remove f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33 (image=quay.io/ceph/ceph:v18, name=hopeful_kepler, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:25 compute-0 systemd[1]: Reloading.
Nov 25 23:32:25 compute-0 sudo[83854]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:25 compute-0 systemd-sysv-generator[84193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:25 compute-0 systemd-rc-local-generator[84190]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:25 compute-0 systemd[1]: libpod-conmon-f25c89a9f592e059c53542345a1890bcb4358233572672870d987ad4ad158e33.scope: Deactivated successfully.
Nov 25 23:32:25 compute-0 systemd[1]: Starting Ceph mgr.compute-0.cckgxa for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:32:25 compute-0 sudo[84250]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzrfesdgnbyofgxnjuqdvwioialmvhwh ; /usr/bin/python3'
Nov 25 23:32:25 compute-0 sudo[84250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:25 compute-0 podman[84273]: 2025-11-25 23:32:25.916806448 +0000 UTC m=+0.067222972 container create 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 23:32:25 compute-0 python3[84260]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:32:25 compute-0 podman[84273]: 2025-11-25 23:32:25.882371215 +0000 UTC m=+0.032787799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:25 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9036c2e2b2fcad3e2e1fae3c0aa618bbace139643c66e554e91b93f8e63bac2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9036c2e2b2fcad3e2e1fae3c0aa618bbace139643c66e554e91b93f8e63bac2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9036c2e2b2fcad3e2e1fae3c0aa618bbace139643c66e554e91b93f8e63bac2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9036c2e2b2fcad3e2e1fae3c0aa618bbace139643c66e554e91b93f8e63bac2e/merged/var/lib/ceph/mgr/ceph-compute-0.cckgxa supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:26 compute-0 podman[84285]: 2025-11-25 23:32:26.015557194 +0000 UTC m=+0.054175533 container create 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:32:26 compute-0 podman[84273]: 2025-11-25 23:32:26.026584569 +0000 UTC m=+0.177001093 container init 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:32:26 compute-0 podman[84273]: 2025-11-25 23:32:26.033870464 +0000 UTC m=+0.184286958 container start 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 23:32:26 compute-0 bash[84273]: 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0
Nov 25 23:32:26 compute-0 systemd[1]: Started Ceph mgr.compute-0.cckgxa for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:32:26 compute-0 ceph-mgr[75954]: [progress INFO root] Writing back 1 completed events
Nov 25 23:32:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 23:32:26 compute-0 systemd[1]: Started libpod-conmon-4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41.scope.
Nov 25 23:32:26 compute-0 sudo[83972]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:26 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:32:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:32:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:32:26 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:32:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:32:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:32:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a89dc3966f656b62328b9b016d5499907a8b1ed135fd91d7e0b0a68ec22165/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a89dc3966f656b62328b9b016d5499907a8b1ed135fd91d7e0b0a68ec22165/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a89dc3966f656b62328b9b016d5499907a8b1ed135fd91d7e0b0a68ec22165/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:26 compute-0 podman[84285]: 2025-11-25 23:32:26.000204742 +0000 UTC m=+0.038823111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:26 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:26 compute-0 ceph-mgr[84304]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 23:32:26 compute-0 ceph-mgr[84304]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 25 23:32:26 compute-0 ceph-mgr[84304]: pidfile_write: ignore empty --pid-file
Nov 25 23:32:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 23:32:26 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:26 compute-0 ceph-mgr[75954]: [progress INFO root] complete: finished ev 5e4e79e8-e0bf-4510-9e37-72aa1cc6fb23 (Updating mgr deployment (+1 -> 2))
Nov 25 23:32:26 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event 5e4e79e8-e0bf-4510-9e37-72aa1cc6fb23 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 25 23:32:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 23:32:26 compute-0 podman[84285]: 2025-11-25 23:32:26.11466731 +0000 UTC m=+0.153285669 container init 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:26 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:26 compute-0 podman[84285]: 2025-11-25 23:32:26.121710119 +0000 UTC m=+0.160328458 container start 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:32:26 compute-0 podman[84285]: 2025-11-25 23:32:26.12698833 +0000 UTC m=+0.165606689 container attach 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:32:26 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3921811036' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 25 23:32:26 compute-0 ceph-mon[75654]: osdmap e3: 0 total, 0 up, 0 in
Nov 25 23:32:26 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:26 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:26 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:26 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:26 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:26 compute-0 sudo[84335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:26 compute-0 sudo[84335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:26 compute-0 sudo[84335]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:26 compute-0 ceph-mgr[84304]: mgr[py] Loading python module 'alerts'
Nov 25 23:32:26 compute-0 sudo[84360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:32:26 compute-0 sudo[84360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:26 compute-0 sudo[84360]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:26 compute-0 sudo[84385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:26 compute-0 sudo[84385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:26 compute-0 sudo[84385]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:26 compute-0 sudo[84410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:26 compute-0 sudo[84410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:26 compute-0 sudo[84410]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:26 compute-0 sudo[84449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:26 compute-0 ceph-mgr[84304]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 23:32:26 compute-0 sudo[84449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:26 compute-0 ceph-mgr[84304]: mgr[py] Loading python module 'balancer'
Nov 25 23:32:26 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa[84293]: 2025-11-25T23:32:26.502+0000 7f04190af140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 23:32:26 compute-0 sudo[84449]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:26 compute-0 sudo[84479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 23:32:26 compute-0 sudo[84479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:26 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:26 compute-0 sudo[84505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:26 compute-0 sudo[84505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:26 compute-0 ceph-mgr[84304]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 23:32:26 compute-0 ceph-mgr[84304]: mgr[py] Loading python module 'cephadm'
Nov 25 23:32:26 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa[84293]: 2025-11-25T23:32:26.751+0000 7f04190af140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 23:32:26 compute-0 sudo[84505]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:26 compute-0 sudo[84541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:26 compute-0 sudo[84541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:26 compute-0 sudo[84541]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:26 compute-0 sudo[84580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:26 compute-0 sudo[84580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:26 compute-0 sudo[84580]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:27 compute-0 sudo[84622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 25 23:32:27 compute-0 sudo[84622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:27 compute-0 ceph-mon[75654]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:27 compute-0 podman[84673]: 2025-11-25 23:32:27.219345383 +0000 UTC m=+0.080461798 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:27 compute-0 podman[84673]: 2025-11-25 23:32:27.360155815 +0000 UTC m=+0.221254470 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:27 compute-0 sudo[84622]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: [cephadm INFO root] Added host compute-0
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 recursing_gagarin[84307]: Added host 'compute-0' with addr '192.168.122.100'
Nov 25 23:32:27 compute-0 recursing_gagarin[84307]: Scheduled mon update...
Nov 25 23:32:27 compute-0 recursing_gagarin[84307]: Scheduled mgr update...
Nov 25 23:32:27 compute-0 recursing_gagarin[84307]: Scheduled osd.default_drive_group update...
Nov 25 23:32:27 compute-0 systemd[1]: libpod-4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41.scope: Deactivated successfully.
Nov 25 23:32:27 compute-0 podman[84285]: 2025-11-25 23:32:27.462007135 +0000 UTC m=+1.500625514 container died 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 23:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4a89dc3966f656b62328b9b016d5499907a8b1ed135fd91d7e0b0a68ec22165-merged.mount: Deactivated successfully.
Nov 25 23:32:27 compute-0 podman[84285]: 2025-11-25 23:32:27.539337307 +0000 UTC m=+1.577955656 container remove 4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41 (image=quay.io/ceph/ceph:v18, name=recursing_gagarin, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:27 compute-0 systemd[1]: libpod-conmon-4a503e74c183b08651bf183cb9afda3baf0daedc220c58e3ffa45e61cd118f41.scope: Deactivated successfully.
Nov 25 23:32:27 compute-0 sudo[84250]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:27 compute-0 sudo[84479]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev fa46c673-c5ef-4319-993d-dcf182181f69 does not exist
Nov 25 23:32:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 25 23:32:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: [progress INFO root] update: starting ev 170bf164-f025-4c2d-825a-b7bdf989c95e (Updating mgr deployment (-1 -> 1))
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.cckgxa from compute-0 -- ports [8765]
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.cckgxa from compute-0 -- ports [8765]
Nov 25 23:32:27 compute-0 sudo[84791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:27 compute-0 sudo[84791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:27 compute-0 sudo[84791]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:27 compute-0 sudo[84849]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzmglmwaavrwndcaoxsknhxqlfwcgvne ; /usr/bin/python3'
Nov 25 23:32:27 compute-0 sudo[84849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:27 compute-0 sudo[84824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:27 compute-0 sudo[84824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:27 compute-0 sudo[84824]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:27 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:28 compute-0 sudo[84867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:28 compute-0 sudo[84867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:28 compute-0 sudo[84867]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:28 compute-0 sudo[84892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 101922db-575f-58e2-980f-928050464f69 --name mgr.compute-0.cckgxa --force --tcp-ports 8765
Nov 25 23:32:28 compute-0 sudo[84892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:28 compute-0 python3[84865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:32:28 compute-0 podman[84919]: 2025-11-25 23:32:28.179085411 +0000 UTC m=+0.056115125 container create d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 23:32:28 compute-0 systemd[1]: Started libpod-conmon-d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d.scope.
Nov 25 23:32:28 compute-0 podman[84919]: 2025-11-25 23:32:28.158340736 +0000 UTC m=+0.035370490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378b53322a6ae95938ed791b4856d7696515d659437a34685e581e2135848b59/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378b53322a6ae95938ed791b4856d7696515d659437a34685e581e2135848b59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378b53322a6ae95938ed791b4856d7696515d659437a34685e581e2135848b59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:28 compute-0 podman[84919]: 2025-11-25 23:32:28.287345323 +0000 UTC m=+0.164375087 container init d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:28 compute-0 podman[84919]: 2025-11-25 23:32:28.298494502 +0000 UTC m=+0.175524246 container start d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:28 compute-0 podman[84919]: 2025-11-25 23:32:28.304624146 +0000 UTC m=+0.181653940 container attach d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: Added host compute-0
Nov 25 23:32:28 compute-0 ceph-mon[75654]: Saving service mon spec with placement compute-0
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: Saving service mgr spec with placement compute-0
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 25 23:32:28 compute-0 ceph-mon[75654]: Saving service osd.default_drive_group spec with placement compute-0
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:28 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.cckgxa for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:32:28 compute-0 ceph-mgr[84304]: mgr[py] Loading python module 'crash'
Nov 25 23:32:28 compute-0 podman[85035]: 2025-11-25 23:32:28.732840641 +0000 UTC m=+0.089188671 container died 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 23:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9036c2e2b2fcad3e2e1fae3c0aa618bbace139643c66e554e91b93f8e63bac2e-merged.mount: Deactivated successfully.
Nov 25 23:32:28 compute-0 podman[85035]: 2025-11-25 23:32:28.802319212 +0000 UTC m=+0.158667252 container remove 973e576265bd7df37317eb5f812b06f0d1cd304e493def0cf8d9c3be1c6286f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 23:32:28 compute-0 bash[85035]: ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-cckgxa
Nov 25 23:32:28 compute-0 systemd[1]: ceph-101922db-575f-58e2-980f-928050464f69@mgr.compute-0.cckgxa.service: Main process exited, code=exited, status=143/n/a
Nov 25 23:32:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 23:32:28 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645594730' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 23:32:28 compute-0 infallible_wozniak[84952]: 
Nov 25 23:32:28 compute-0 infallible_wozniak[84952]: {"fsid":"101922db-575f-58e2-980f-928050464f69","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":78,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-25T23:31:07.189601+0000","services":{}},"progress_events":{"170bf164-f025-4c2d-825a-b7bdf989c95e":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 25 23:32:28 compute-0 systemd[1]: libpod-d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d.scope: Deactivated successfully.
Nov 25 23:32:28 compute-0 podman[84919]: 2025-11-25 23:32:28.925869354 +0000 UTC m=+0.802899088 container died d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-378b53322a6ae95938ed791b4856d7696515d659437a34685e581e2135848b59-merged.mount: Deactivated successfully.
Nov 25 23:32:28 compute-0 podman[84919]: 2025-11-25 23:32:28.989918609 +0000 UTC m=+0.866948313 container remove d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d (image=quay.io/ceph/ceph:v18, name=infallible_wozniak, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:29 compute-0 systemd[1]: libpod-conmon-d3b85dc52598c6f1c56fc161f0d07b10e93591de79b179297633f7996ae0cf3d.scope: Deactivated successfully.
Nov 25 23:32:29 compute-0 systemd[1]: ceph-101922db-575f-58e2-980f-928050464f69@mgr.compute-0.cckgxa.service: Failed with result 'exit-code'.
Nov 25 23:32:29 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.cckgxa for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:32:29 compute-0 systemd[1]: ceph-101922db-575f-58e2-980f-928050464f69@mgr.compute-0.cckgxa.service: Consumed 3.764s CPU time.
Nov 25 23:32:29 compute-0 sudo[84849]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:29 compute-0 systemd[1]: Reloading.
Nov 25 23:32:29 compute-0 systemd-rc-local-generator[85131]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:29 compute-0 systemd-sysv-generator[85136]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:29 compute-0 sudo[84892]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:29 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.cckgxa
Nov 25 23:32:29 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.cckgxa
Nov 25 23:32:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.cckgxa"} v 0) v1
Nov 25 23:32:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.cckgxa"}]: dispatch
Nov 25 23:32:29 compute-0 ceph-mon[75654]: Removing daemon mgr.compute-0.cckgxa from compute-0 -- ports [8765]
Nov 25 23:32:29 compute-0 ceph-mon[75654]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:29 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3645594730' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 23:32:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.cckgxa"}]': finished
Nov 25 23:32:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 23:32:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:29 compute-0 ceph-mgr[75954]: [progress INFO root] complete: finished ev 170bf164-f025-4c2d-825a-b7bdf989c95e (Updating mgr deployment (-1 -> 1))
Nov 25 23:32:29 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event 170bf164-f025-4c2d-825a-b7bdf989c95e (Updating mgr deployment (-1 -> 1)) in 2 seconds
Nov 25 23:32:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 25 23:32:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:29 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 407603af-93ec-47c3-9c69-317c36f86d49 does not exist
Nov 25 23:32:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:32:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:32:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:32:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:32:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:32:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:29 compute-0 sudo[85143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:29 compute-0 sudo[85143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:29 compute-0 sudo[85143]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:29 compute-0 sudo[85168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:29 compute-0 sudo[85168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:29 compute-0 sudo[85168]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:29 compute-0 sudo[85193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:29 compute-0 sudo[85193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:29 compute-0 sudo[85193]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:29 compute-0 sudo[85218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:32:29 compute-0 sudo[85218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:29 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:30 compute-0 podman[85286]: 2025-11-25 23:32:30.211330701 +0000 UTC m=+0.058052267 container create c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 23:32:30 compute-0 systemd[1]: Started libpod-conmon-c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a.scope.
Nov 25 23:32:30 compute-0 podman[85286]: 2025-11-25 23:32:30.190895883 +0000 UTC m=+0.037617479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:30 compute-0 podman[85286]: 2025-11-25 23:32:30.326880867 +0000 UTC m=+0.173602483 container init c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 23:32:30 compute-0 podman[85286]: 2025-11-25 23:32:30.342536156 +0000 UTC m=+0.189257722 container start c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 23:32:30 compute-0 podman[85286]: 2025-11-25 23:32:30.347224763 +0000 UTC m=+0.193946329 container attach c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:32:30 compute-0 lucid_khorana[85302]: 167 167
Nov 25 23:32:30 compute-0 systemd[1]: libpod-c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a.scope: Deactivated successfully.
Nov 25 23:32:30 compute-0 podman[85286]: 2025-11-25 23:32:30.349643578 +0000 UTC m=+0.196365124 container died c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3900020c5d11de5a42b34ad5e97ee6dd0afd9a89c5538d8ba451b29c8fe2377a-merged.mount: Deactivated successfully.
Nov 25 23:32:30 compute-0 podman[85286]: 2025-11-25 23:32:30.395569168 +0000 UTC m=+0.242290734 container remove c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_khorana, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:30 compute-0 ceph-mon[75654]: Removing key for mgr.compute-0.cckgxa
Nov 25 23:32:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.cckgxa"}]: dispatch
Nov 25 23:32:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.cckgxa"}]': finished
Nov 25 23:32:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:32:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:32:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:30 compute-0 systemd[1]: libpod-conmon-c31620b14d72ce182c52cb23e5f32e8e12cf9edec8acea4fd238cc57ec45c99a.scope: Deactivated successfully.
Nov 25 23:32:30 compute-0 podman[85326]: 2025-11-25 23:32:30.568851162 +0000 UTC m=+0.035190444 container create 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:30 compute-0 systemd[1]: Started libpod-conmon-1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018.scope.
Nov 25 23:32:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:30 compute-0 podman[85326]: 2025-11-25 23:32:30.554740213 +0000 UTC m=+0.021079515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:30 compute-0 podman[85326]: 2025-11-25 23:32:30.653751737 +0000 UTC m=+0.120091069 container init 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:30 compute-0 podman[85326]: 2025-11-25 23:32:30.66694647 +0000 UTC m=+0.133285762 container start 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:30 compute-0 podman[85326]: 2025-11-25 23:32:30.670822694 +0000 UTC m=+0.137162016 container attach 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:30 compute-0 sshd-session[85243]: Invalid user msf from 124.18.141.70 port 59756
Nov 25 23:32:31 compute-0 sshd-session[85243]: Received disconnect from 124.18.141.70 port 59756:11: Bye Bye [preauth]
Nov 25 23:32:31 compute-0 sshd-session[85243]: Disconnected from invalid user msf 124.18.141.70 port 59756 [preauth]
Nov 25 23:32:31 compute-0 ceph-mgr[75954]: [progress INFO root] Writing back 3 completed events
Nov 25 23:32:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 23:32:31 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:31 compute-0 ceph-mon[75654]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:31 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:31 compute-0 keen_mirzakhani[85343]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:32:31 compute-0 keen_mirzakhani[85343]: --> relative data size: 1.0
Nov 25 23:32:31 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 23:32:31 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c5ddf08a-6193-41e0-8332-60b5083aa62e
Nov 25 23:32:31 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e"} v 0) v1
Nov 25 23:32:32 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1583645874' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e"}]: dispatch
Nov 25 23:32:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 25 23:32:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:32:32 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1583645874' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e"}]': finished
Nov 25 23:32:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 25 23:32:32 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 25 23:32:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 23:32:32 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:32 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 23:32:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1583645874' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e"}]: dispatch
Nov 25 23:32:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1583645874' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e"}]': finished
Nov 25 23:32:32 compute-0 ceph-mon[75654]: osdmap e4: 1 total, 0 up, 1 in
Nov 25 23:32:32 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:32 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 23:32:32 compute-0 lvm[85406]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 23:32:32 compute-0 lvm[85406]: VG ceph_vg0 finished
Nov 25 23:32:32 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 25 23:32:32 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 25 23:32:32 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 23:32:32 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:32 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 25 23:32:33 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 23:32:33 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174327017' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 23:32:33 compute-0 keen_mirzakhani[85343]:  stderr: got monmap epoch 1
Nov 25 23:32:33 compute-0 keen_mirzakhani[85343]: --> Creating keyring file for osd.0
Nov 25 23:32:33 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 25 23:32:33 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 25 23:32:33 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c5ddf08a-6193-41e0-8332-60b5083aa62e --setuser ceph --setgroup ceph
Nov 25 23:32:33 compute-0 sshd-session[85347]: Invalid user user10 from 45.78.216.132 port 43032
Nov 25 23:32:33 compute-0 ceph-mon[75654]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:33 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4174327017' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 23:32:33 compute-0 sshd-session[85347]: Received disconnect from 45.78.216.132 port 43032:11: Bye Bye [preauth]
Nov 25 23:32:33 compute-0 sshd-session[85347]: Disconnected from invalid user user10 45.78.216.132 port 43032 [preauth]
Nov 25 23:32:33 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:34 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 23:32:34 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 23:32:34 compute-0 ceph-mon[75654]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:34 compute-0 ceph-mon[75654]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 25 23:32:34 compute-0 ceph-mon[75654]: Cluster is now healthy
Nov 25 23:32:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:33.109+0000 7fdf9b97a740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:33.109+0000 7fdf9b97a740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:33.109+0000 7fdf9b97a740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:33.109+0000 7fdf9b97a740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 23:32:35 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 21bbab34-bea3-466b-8bf7-812749fcef47
Nov 25 23:32:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "21bbab34-bea3-466b-8bf7-812749fcef47"} v 0) v1
Nov 25 23:32:35 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1437777547' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "21bbab34-bea3-466b-8bf7-812749fcef47"}]: dispatch
Nov 25 23:32:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 25 23:32:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:32:35 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1437777547' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "21bbab34-bea3-466b-8bf7-812749fcef47"}]': finished
Nov 25 23:32:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 25 23:32:35 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 25 23:32:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 23:32:35 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:35 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:35 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 23:32:35 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:35 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:36 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1437777547' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "21bbab34-bea3-466b-8bf7-812749fcef47"}]: dispatch
Nov 25 23:32:36 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1437777547' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "21bbab34-bea3-466b-8bf7-812749fcef47"}]': finished
Nov 25 23:32:36 compute-0 ceph-mon[75654]: osdmap e5: 2 total, 0 up, 2 in
Nov 25 23:32:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 23:32:36 compute-0 lvm[86361]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 23:32:36 compute-0 lvm[86361]: VG ceph_vg1 finished
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 25 23:32:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 23:32:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1149492936' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]:  stderr: got monmap epoch 1
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]: --> Creating keyring file for osd.1
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 25 23:32:36 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 21bbab34-bea3-466b-8bf7-812749fcef47 --setuser ceph --setgroup ceph
Nov 25 23:32:37 compute-0 ceph-mon[75654]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:37 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1149492936' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 23:32:37 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:39 compute-0 ceph-mon[75654]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:36.801+0000 7fcfcaae7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:36.802+0000 7fcfcaae7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:36.802+0000 7fcfcaae7740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:36.802+0000 7fcfcaae7740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 23:32:39 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 019d967b-1a56-4e90-8682-a890da577e20
Nov 25 23:32:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "019d967b-1a56-4e90-8682-a890da577e20"} v 0) v1
Nov 25 23:32:39 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/144360762' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "019d967b-1a56-4e90-8682-a890da577e20"}]: dispatch
Nov 25 23:32:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 25 23:32:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:32:39 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/144360762' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "019d967b-1a56-4e90-8682-a890da577e20"}]': finished
Nov 25 23:32:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 25 23:32:39 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 25 23:32:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 23:32:39 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:39 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:32:39 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 23:32:39 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:39 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:39 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:32:39 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 23:32:40 compute-0 lvm[87311]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 23:32:40 compute-0 lvm[87311]: VG ceph_vg2 finished
Nov 25 23:32:40 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/144360762' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "019d967b-1a56-4e90-8682-a890da577e20"}]: dispatch
Nov 25 23:32:40 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/144360762' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "019d967b-1a56-4e90-8682-a890da577e20"}]': finished
Nov 25 23:32:40 compute-0 ceph-mon[75654]: osdmap e6: 3 total, 0 up, 3 in
Nov 25 23:32:40 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:40 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:40 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 25 23:32:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 25 23:32:40 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239912430' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]:  stderr: got monmap epoch 1
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]: --> Creating keyring file for osd.2
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 25 23:32:40 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 019d967b-1a56-4e90-8682-a890da577e20 --setuser ceph --setgroup ceph
Nov 25 23:32:41 compute-0 ceph-mon[75654]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:41 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/239912430' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 23:32:41 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:40.612+0000 7f3d7028a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:40.612+0000 7f3d7028a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:40.612+0000 7f3d7028a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]:  stderr: 2025-11-25T23:32:40.612+0000 7f3d7028a740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 23:32:42 compute-0 keen_mirzakhani[85343]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 23:32:43 compute-0 keen_mirzakhani[85343]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 25 23:32:43 compute-0 keen_mirzakhani[85343]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 25 23:32:43 compute-0 ceph-mon[75654]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:43 compute-0 systemd[1]: libpod-1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018.scope: Deactivated successfully.
Nov 25 23:32:43 compute-0 systemd[1]: libpod-1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018.scope: Consumed 6.786s CPU time.
Nov 25 23:32:43 compute-0 podman[88232]: 2025-11-25 23:32:43.125300304 +0000 UTC m=+0.044802311 container died 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aa9bf557799dc3b620078240d18c033263b1c0b38534363bf9c83cb1a9df318-merged.mount: Deactivated successfully.
Nov 25 23:32:43 compute-0 podman[88232]: 2025-11-25 23:32:43.211375641 +0000 UTC m=+0.130877588 container remove 1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 23:32:43 compute-0 systemd[1]: libpod-conmon-1db6dab5773c2eb934c3f9f2a0ad11922c2f525dd8696b53e1af12553ea04018.scope: Deactivated successfully.
Nov 25 23:32:43 compute-0 sudo[85218]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:43 compute-0 sudo[88247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:43 compute-0 sudo[88247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:43 compute-0 sudo[88247]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:43 compute-0 sudo[88272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:43 compute-0 sudo[88272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:43 compute-0 sudo[88272]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:43 compute-0 sudo[88297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:43 compute-0 sudo[88297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:43 compute-0 sudo[88297]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:43 compute-0 sudo[88322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:32:43 compute-0 sudo[88322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:43 compute-0 podman[88389]: 2025-11-25 23:32:43.95510799 +0000 UTC m=+0.053617361 container create b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 23:32:43 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:44 compute-0 systemd[1]: Started libpod-conmon-b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e.scope.
Nov 25 23:32:44 compute-0 podman[88389]: 2025-11-25 23:32:43.928462125 +0000 UTC m=+0.026971536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:44 compute-0 podman[88389]: 2025-11-25 23:32:44.06470166 +0000 UTC m=+0.163211021 container init b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:44 compute-0 podman[88389]: 2025-11-25 23:32:44.077229077 +0000 UTC m=+0.175738438 container start b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 23:32:44 compute-0 podman[88389]: 2025-11-25 23:32:44.082147095 +0000 UTC m=+0.180656506 container attach b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:32:44 compute-0 elated_tu[88405]: 167 167
Nov 25 23:32:44 compute-0 systemd[1]: libpod-b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e.scope: Deactivated successfully.
Nov 25 23:32:44 compute-0 podman[88389]: 2025-11-25 23:32:44.084660051 +0000 UTC m=+0.183169422 container died b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 23:32:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2aa2bff6fe618b7a163581cf2cacefe80c91441756792816dbbb3f1ccf5ef59f-merged.mount: Deactivated successfully.
Nov 25 23:32:44 compute-0 podman[88389]: 2025-11-25 23:32:44.129353197 +0000 UTC m=+0.227862558 container remove b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_tu, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:32:44 compute-0 systemd[1]: libpod-conmon-b3a3f5a3c2beb697ee1ec0fe37dd27f8bcec691a6cc65010fdf8ac5926038a2e.scope: Deactivated successfully.
Nov 25 23:32:44 compute-0 podman[88431]: 2025-11-25 23:32:44.354881993 +0000 UTC m=+0.053016725 container create 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:32:44 compute-0 systemd[1]: Started libpod-conmon-83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8.scope.
Nov 25 23:32:44 compute-0 podman[88431]: 2025-11-25 23:32:44.333464004 +0000 UTC m=+0.031598816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e451f99bddd449245975cb3159e436e42477164b18600181bdb8f7a401b73cd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e451f99bddd449245975cb3159e436e42477164b18600181bdb8f7a401b73cd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e451f99bddd449245975cb3159e436e42477164b18600181bdb8f7a401b73cd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e451f99bddd449245975cb3159e436e42477164b18600181bdb8f7a401b73cd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:44 compute-0 podman[88431]: 2025-11-25 23:32:44.464757771 +0000 UTC m=+0.162892573 container init 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 23:32:44 compute-0 podman[88431]: 2025-11-25 23:32:44.475285155 +0000 UTC m=+0.173419897 container start 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 23:32:44 compute-0 podman[88431]: 2025-11-25 23:32:44.478814848 +0000 UTC m=+0.176949660 container attach 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:45 compute-0 ceph-mon[75654]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]: {
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:     "0": [
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:         {
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "devices": [
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "/dev/loop3"
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             ],
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_name": "ceph_lv0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_size": "21470642176",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "name": "ceph_lv0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "tags": {
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.cluster_name": "ceph",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.crush_device_class": "",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.encrypted": "0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.osd_id": "0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.type": "block",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.vdo": "0"
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             },
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "type": "block",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "vg_name": "ceph_vg0"
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:         }
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:     ],
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:     "1": [
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:         {
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "devices": [
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "/dev/loop4"
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             ],
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_name": "ceph_lv1",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_size": "21470642176",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "name": "ceph_lv1",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "tags": {
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.cluster_name": "ceph",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.crush_device_class": "",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.encrypted": "0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.osd_id": "1",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.type": "block",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.vdo": "0"
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             },
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "type": "block",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "vg_name": "ceph_vg1"
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:         }
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:     ],
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:     "2": [
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:         {
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "devices": [
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "/dev/loop5"
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             ],
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_name": "ceph_lv2",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_size": "21470642176",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "name": "ceph_lv2",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "tags": {
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.cluster_name": "ceph",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.crush_device_class": "",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.encrypted": "0",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.osd_id": "2",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.type": "block",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:                 "ceph.vdo": "0"
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             },
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "type": "block",
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:             "vg_name": "ceph_vg2"
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:         }
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]:     ]
Nov 25 23:32:45 compute-0 zealous_mclaren[88447]: }
Nov 25 23:32:45 compute-0 systemd[1]: libpod-83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8.scope: Deactivated successfully.
Nov 25 23:32:45 compute-0 podman[88431]: 2025-11-25 23:32:45.256637737 +0000 UTC m=+0.954772489 container died 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e451f99bddd449245975cb3159e436e42477164b18600181bdb8f7a401b73cd1-merged.mount: Deactivated successfully.
Nov 25 23:32:45 compute-0 podman[88431]: 2025-11-25 23:32:45.320136315 +0000 UTC m=+1.018271077 container remove 83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclaren, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:32:45 compute-0 systemd[1]: libpod-conmon-83373ad0fabfa7b01bfed07fb213c5ac42d6c0ba0cbe2d2941d1568835d33db8.scope: Deactivated successfully.
Nov 25 23:32:45 compute-0 sudo[88322]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 25 23:32:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 23:32:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:32:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:45 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 25 23:32:45 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 25 23:32:45 compute-0 sudo[88466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:45 compute-0 sudo[88466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:45 compute-0 sudo[88466]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:45 compute-0 sudo[88491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:45 compute-0 sudo[88491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:45 compute-0 sudo[88491]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:45 compute-0 sudo[88516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:45 compute-0 sudo[88516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:45 compute-0 sudo[88516]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:45 compute-0 sudo[88541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:45 compute-0 sudo[88541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:45 compute-0 podman[88606]: 2025-11-25 23:32:45.981798212 +0000 UTC m=+0.037364335 container create d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:45 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:46 compute-0 systemd[1]: Started libpod-conmon-d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54.scope.
Nov 25 23:32:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:46 compute-0 podman[88606]: 2025-11-25 23:32:45.965856117 +0000 UTC m=+0.021422270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:46 compute-0 podman[88606]: 2025-11-25 23:32:46.071810112 +0000 UTC m=+0.127376295 container init d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:46 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 23:32:46 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:46 compute-0 ceph-mon[75654]: Deploying daemon osd.0 on compute-0
Nov 25 23:32:46 compute-0 podman[88606]: 2025-11-25 23:32:46.079629556 +0000 UTC m=+0.135195689 container start d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 23:32:46 compute-0 quirky_wozniak[88623]: 167 167
Nov 25 23:32:46 compute-0 systemd[1]: libpod-d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54.scope: Deactivated successfully.
Nov 25 23:32:46 compute-0 podman[88606]: 2025-11-25 23:32:46.087282296 +0000 UTC m=+0.142848499 container attach d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:46 compute-0 podman[88606]: 2025-11-25 23:32:46.0882436 +0000 UTC m=+0.143809733 container died d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 23:32:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-00d8399ef089d1089dc0351ff81858bd20bfefd939194bd58a7c3c355a7bbded-merged.mount: Deactivated successfully.
Nov 25 23:32:46 compute-0 podman[88606]: 2025-11-25 23:32:46.119129516 +0000 UTC m=+0.174695669 container remove d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:32:46 compute-0 systemd[1]: libpod-conmon-d84fa3a119df6e81d1023a51419479cd3ba7634224b149816376eb66382f1f54.scope: Deactivated successfully.
Nov 25 23:32:46 compute-0 podman[88653]: 2025-11-25 23:32:46.397129392 +0000 UTC m=+0.041617317 container create e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:32:46 compute-0 systemd[1]: Started libpod-conmon-e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63.scope.
Nov 25 23:32:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:46 compute-0 podman[88653]: 2025-11-25 23:32:46.380844407 +0000 UTC m=+0.025332352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:46 compute-0 podman[88653]: 2025-11-25 23:32:46.505531711 +0000 UTC m=+0.150019726 container init e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:32:46 compute-0 podman[88653]: 2025-11-25 23:32:46.517112934 +0000 UTC m=+0.161600859 container start e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 23:32:46 compute-0 podman[88653]: 2025-11-25 23:32:46.520337908 +0000 UTC m=+0.164825923 container attach e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:32:47 compute-0 ceph-mon[75654]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:47 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test[88669]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 23:32:47 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test[88669]:                             [--no-systemd] [--no-tmpfs]
Nov 25 23:32:47 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test[88669]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 23:32:47 compute-0 systemd[1]: libpod-e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63.scope: Deactivated successfully.
Nov 25 23:32:47 compute-0 podman[88653]: 2025-11-25 23:32:47.140980625 +0000 UTC m=+0.785468550 container died e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d581f6958527ef0f3faa6aa902100ffd99408105a90a5d24e2554e455a286734-merged.mount: Deactivated successfully.
Nov 25 23:32:47 compute-0 podman[88653]: 2025-11-25 23:32:47.205712334 +0000 UTC m=+0.850200269 container remove e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 23:32:47 compute-0 systemd[1]: libpod-conmon-e9767c5f68a5c6ce94f1e70eda9cee0baff8ef144808fa8a69de58c5702b8f63.scope: Deactivated successfully.
Nov 25 23:32:47 compute-0 systemd[1]: Reloading.
Nov 25 23:32:47 compute-0 systemd-sysv-generator[88730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:47 compute-0 systemd-rc-local-generator[88727]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:47 compute-0 systemd[1]: Reloading.
Nov 25 23:32:47 compute-0 systemd-rc-local-generator[88771]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:47 compute-0 systemd-sysv-generator[88774]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:47 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:48 compute-0 systemd[1]: Starting Ceph osd.0 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:32:48 compute-0 podman[88828]: 2025-11-25 23:32:48.325275863 +0000 UTC m=+0.072776161 container create 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:48 compute-0 podman[88828]: 2025-11-25 23:32:48.292923818 +0000 UTC m=+0.040424186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:48 compute-0 podman[88828]: 2025-11-25 23:32:48.42481801 +0000 UTC m=+0.172318328 container init 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:48 compute-0 podman[88828]: 2025-11-25 23:32:48.437496881 +0000 UTC m=+0.184997199 container start 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 23:32:48 compute-0 podman[88828]: 2025-11-25 23:32:48.44281994 +0000 UTC m=+0.190320258 container attach 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:49 compute-0 ceph-mon[75654]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:49 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 23:32:49 compute-0 bash[88828]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 23:32:49 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 23:32:49 compute-0 bash[88828]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 23:32:49 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 23:32:49 compute-0 bash[88828]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 25 23:32:49 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 23:32:49 compute-0 bash[88828]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 23:32:49 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:49 compute-0 bash[88828]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:49 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 23:32:49 compute-0 bash[88828]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 25 23:32:49 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate[88844]: --> ceph-volume raw activate successful for osd ID: 0
Nov 25 23:32:49 compute-0 bash[88828]: --> ceph-volume raw activate successful for osd ID: 0
Nov 25 23:32:49 compute-0 systemd[1]: libpod-65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b.scope: Deactivated successfully.
Nov 25 23:32:49 compute-0 systemd[1]: libpod-65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b.scope: Consumed 1.147s CPU time.
Nov 25 23:32:49 compute-0 podman[88828]: 2025-11-25 23:32:49.564909844 +0000 UTC m=+1.312410132 container died 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 23:32:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f641f7e61b429d2ac620d8812cf379747d74b07245263739f063a42a398e8c9f-merged.mount: Deactivated successfully.
Nov 25 23:32:49 compute-0 podman[88828]: 2025-11-25 23:32:49.647988743 +0000 UTC m=+1.395489041 container remove 65e71522a9e79b8b51a2b2a5616cd09e3e9e68922378ba8756225394c9c1740b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 23:32:49 compute-0 podman[89024]: 2025-11-25 23:32:49.936645866 +0000 UTC m=+0.047317616 container create 1cdf379c2ca7941793d745665193cd2db26279d7acee8d9c89085655245b879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:49 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32122fa7e0b7886ecfed190746990384b8733bfebb6f647677b35f2503f7c4fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32122fa7e0b7886ecfed190746990384b8733bfebb6f647677b35f2503f7c4fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32122fa7e0b7886ecfed190746990384b8733bfebb6f647677b35f2503f7c4fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32122fa7e0b7886ecfed190746990384b8733bfebb6f647677b35f2503f7c4fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32122fa7e0b7886ecfed190746990384b8733bfebb6f647677b35f2503f7c4fc/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:50 compute-0 podman[89024]: 2025-11-25 23:32:50.007504065 +0000 UTC m=+0.118175885 container init 1cdf379c2ca7941793d745665193cd2db26279d7acee8d9c89085655245b879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:50 compute-0 sshd-session[88849]: Received disconnect from 202.83.162.167 port 54628:11: Bye Bye [preauth]
Nov 25 23:32:50 compute-0 sshd-session[88849]: Disconnected from authenticating user root 202.83.162.167 port 54628 [preauth]
Nov 25 23:32:50 compute-0 podman[89024]: 2025-11-25 23:32:49.91684749 +0000 UTC m=+0.027519290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:50 compute-0 podman[89024]: 2025-11-25 23:32:50.015583836 +0000 UTC m=+0.126255616 container start 1cdf379c2ca7941793d745665193cd2db26279d7acee8d9c89085655245b879f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:32:50 compute-0 bash[89024]: 1cdf379c2ca7941793d745665193cd2db26279d7acee8d9c89085655245b879f
Nov 25 23:32:50 compute-0 systemd[1]: Started Ceph osd.0 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:32:50 compute-0 ceph-osd[89044]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 23:32:50 compute-0 ceph-osd[89044]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 23:32:50 compute-0 ceph-osd[89044]: pidfile_write: ignore empty --pid-file
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4eb9f5800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4eb9f5800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4eb9f5800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4eb9f5800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec837800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec837800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec837800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec837800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec837800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 23:32:50 compute-0 sudo[88541]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:32:50 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:50 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 25 23:32:50 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 23:32:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:32:50 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:50 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 25 23:32:50 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 25 23:32:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:50 compute-0 sudo[89057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:50 compute-0 sudo[89057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:50 compute-0 sudo[89057]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:50 compute-0 sudo[89082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:50 compute-0 sudo[89082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:50 compute-0 sudo[89082]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:50 compute-0 sudo[89107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:50 compute-0 sudo[89107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4eb9f5800 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 23:32:50 compute-0 sudo[89107]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:50 compute-0 sudo[89132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:50 compute-0 sudo[89132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:50 compute-0 ceph-osd[89044]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 25 23:32:50 compute-0 ceph-osd[89044]: load: jerasure load: lrc 
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 23:32:50 compute-0 podman[89204]: 2025-11-25 23:32:50.767049808 +0000 UTC m=+0.047426989 container create 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:50 compute-0 systemd[1]: Started libpod-conmon-06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9.scope.
Nov 25 23:32:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:50 compute-0 podman[89204]: 2025-11-25 23:32:50.74759047 +0000 UTC m=+0.027967681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:50 compute-0 podman[89204]: 2025-11-25 23:32:50.849644344 +0000 UTC m=+0.130021555 container init 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:50 compute-0 podman[89204]: 2025-11-25 23:32:50.856369779 +0000 UTC m=+0.136746950 container start 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:50 compute-0 podman[89204]: 2025-11-25 23:32:50.859156382 +0000 UTC m=+0.139533603 container attach 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:50 compute-0 agitated_williamson[89221]: 167 167
Nov 25 23:32:50 compute-0 systemd[1]: libpod-06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9.scope: Deactivated successfully.
Nov 25 23:32:50 compute-0 podman[89204]: 2025-11-25 23:32:50.862093159 +0000 UTC m=+0.142470330 container died 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:32:50 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 23:32:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-93d25b43b1a60a75f529204b5979fa45d92686e6a956b8ad6443a54f389bd7e1-merged.mount: Deactivated successfully.
Nov 25 23:32:50 compute-0 podman[89204]: 2025-11-25 23:32:50.899118005 +0000 UTC m=+0.179495176 container remove 06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:50 compute-0 systemd[1]: libpod-conmon-06e03bf295be77b4cb9ad57e073409309fe1c355c5151f01e12217377d0e8fc9.scope: Deactivated successfully.
Nov 25 23:32:51 compute-0 ceph-mon[75654]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:51 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:51 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:51 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 23:32:51 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:51 compute-0 ceph-mon[75654]: Deploying daemon osd.1 on compute-0
Nov 25 23:32:51 compute-0 podman[89258]: 2025-11-25 23:32:51.122232888 +0000 UTC m=+0.036587267 container create 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 23:32:51 compute-0 ceph-osd[89044]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b8c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluefs mount
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluefs mount shared_bdev_used = 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: RocksDB version: 7.9.2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Git sha 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: DB SUMMARY
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: DB Session ID:  QLA8XFNANC8L6IOXF55E
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: CURRENT file:  CURRENT
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                         Options.error_if_exists: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.create_if_missing: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                                     Options.env: 0x55a4ec889d50
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                                Options.info_log: 0x55a4eba7c7e0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                              Options.statistics: (nil)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.use_fsync: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                              Options.db_log_dir: 
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.write_buffer_manager: 0x55a4ec992460
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 23:32:51 compute-0 systemd[1]: Started libpod-conmon-98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197.scope.
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.unordered_write: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.row_cache: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                              Options.wal_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.two_write_queues: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.wal_compression: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.atomic_flush: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.max_background_jobs: 4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.max_background_compactions: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.max_subcompactions: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.max_open_files: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Compression algorithms supported:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kZSTD supported: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kXpressCompression supported: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kBZip2Compression supported: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kLZ4Compression supported: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kZlibCompression supported: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kSnappyCompression supported: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba69090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 podman[89258]: 2025-11-25 23:32:51.20164972 +0000 UTC m=+0.116004109 container init 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 23:32:51 compute-0 podman[89258]: 2025-11-25 23:32:51.106318952 +0000 UTC m=+0.020673341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba69090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba69090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3bce52ae-013e-4dc3-b6d7-f1899aea7616
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571169131, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571169315, "job": 1, "event": "recovery_finished"}
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: freelist init
Nov 25 23:32:51 compute-0 ceph-osd[89044]: freelist _read_cfg
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluefs umount
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) close
Nov 25 23:32:51 compute-0 podman[89258]: 2025-11-25 23:32:51.212119424 +0000 UTC m=+0.126473793 container start 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 23:32:51 compute-0 podman[89258]: 2025-11-25 23:32:51.215156793 +0000 UTC m=+0.129511152 container attach 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bdev(0x55a4ec8b9400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluefs mount
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluefs mount shared_bdev_used = 4718592
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: RocksDB version: 7.9.2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Git sha 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: DB SUMMARY
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: DB Session ID:  QLA8XFNANC8L6IOXF55F
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: CURRENT file:  CURRENT
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                         Options.error_if_exists: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.create_if_missing: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                                     Options.env: 0x55a4eca227e0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                                Options.info_log: 0x55a4ec885a80
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                              Options.statistics: (nil)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.use_fsync: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                              Options.db_log_dir: 
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.write_buffer_manager: 0x55a4ec9926e0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.unordered_write: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.row_cache: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                              Options.wal_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.two_write_queues: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.wal_compression: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.atomic_flush: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.max_background_jobs: 4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.max_background_compactions: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.max_subcompactions: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.max_open_files: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Compression algorithms supported:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kZSTD supported: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kXpressCompression supported: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kBZip2Compression supported: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kLZ4Compression supported: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kZlibCompression supported: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         kSnappyCompression supported: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba691f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba69090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba69090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a4eba7c300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a4eba69090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3bce52ae-013e-4dc3-b6d7-f1899aea7616
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571452491, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571456885, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113571, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3bce52ae-013e-4dc3-b6d7-f1899aea7616", "db_session_id": "QLA8XFNANC8L6IOXF55F", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571459347, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113571, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3bce52ae-013e-4dc3-b6d7-f1899aea7616", "db_session_id": "QLA8XFNANC8L6IOXF55F", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571462058, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113571, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3bce52ae-013e-4dc3-b6d7-f1899aea7616", "db_session_id": "QLA8XFNANC8L6IOXF55F", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113571464364, "job": 1, "event": "recovery_finished"}
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a4ebbd6000
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: DB pointer 0x55a4ec97ba00
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 25 23:32:51 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:32:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:32:51 compute-0 ceph-osd[89044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 23:32:51 compute-0 ceph-osd[89044]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 23:32:51 compute-0 ceph-osd[89044]: _get_class not permitted to load lua
Nov 25 23:32:51 compute-0 ceph-osd[89044]: _get_class not permitted to load sdk
Nov 25 23:32:51 compute-0 ceph-osd[89044]: _get_class not permitted to load test_remote_reads
Nov 25 23:32:51 compute-0 ceph-osd[89044]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 23:32:51 compute-0 ceph-osd[89044]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 23:32:51 compute-0 ceph-osd[89044]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 23:32:51 compute-0 ceph-osd[89044]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 23:32:51 compute-0 ceph-osd[89044]: osd.0 0 load_pgs
Nov 25 23:32:51 compute-0 ceph-osd[89044]: osd.0 0 load_pgs opened 0 pgs
Nov 25 23:32:51 compute-0 ceph-osd[89044]: osd.0 0 log_to_monitors true
Nov 25 23:32:51 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0[89040]: 2025-11-25T23:32:51.507+0000 7f3007b5f740 -1 osd.0 0 log_to_monitors true
Nov 25 23:32:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 25 23:32:51 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 23:32:51 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test[89364]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 23:32:51 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test[89364]:                             [--no-systemd] [--no-tmpfs]
Nov 25 23:32:51 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test[89364]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 23:32:51 compute-0 systemd[1]: libpod-98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197.scope: Deactivated successfully.
Nov 25 23:32:51 compute-0 podman[89258]: 2025-11-25 23:32:51.809440452 +0000 UTC m=+0.723794821 container died 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 23:32:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-857f4e6a8c699251961491629bfd1d38b5f878bb2f9389e34cb114e7641e1b4a-merged.mount: Deactivated successfully.
Nov 25 23:32:51 compute-0 podman[89258]: 2025-11-25 23:32:51.886262347 +0000 UTC m=+0.800616756 container remove 98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:32:51 compute-0 systemd[1]: libpod-conmon-98937659c6335a1a88af14a2503e4213849250a33d4a594104c0206b3cd23197.scope: Deactivated successfully.
Nov 25 23:32:51 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 25 23:32:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:32:52 compute-0 ceph-mon[75654]: from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 23:32:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 23:32:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 25 23:32:52 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 25 23:32:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 23:32:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 23:32:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 23:32:52 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 23:32:52 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 23:32:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:32:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:52 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:32:52 compute-0 systemd[1]: Reloading.
Nov 25 23:32:52 compute-0 systemd-rc-local-generator[89748]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:52 compute-0 systemd-sysv-generator[89751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:52 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 23:32:52 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 23:32:52 compute-0 systemd[1]: Reloading.
Nov 25 23:32:52 compute-0 systemd-rc-local-generator[89783]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:52 compute-0 systemd-sysv-generator[89786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:52 compute-0 systemd[1]: Starting Ceph osd.1 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:32:52 compute-0 podman[89848]: 2025-11-25 23:32:52.998755961 +0000 UTC m=+0.064677778 container create 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 23:32:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:53 compute-0 podman[89848]: 2025-11-25 23:32:52.973552484 +0000 UTC m=+0.039474411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:53 compute-0 podman[89848]: 2025-11-25 23:32:53.078689407 +0000 UTC m=+0.144611244 container init 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 23:32:53 compute-0 podman[89848]: 2025-11-25 23:32:53.090738702 +0000 UTC m=+0.156660519 container start 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 25 23:32:53 compute-0 podman[89848]: 2025-11-25 23:32:53.096920763 +0000 UTC m=+0.162842580 container attach 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 25 23:32:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:32:53 compute-0 ceph-mon[75654]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:53 compute-0 ceph-mon[75654]: from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 23:32:53 compute-0 ceph-mon[75654]: osdmap e7: 3 total, 0 up, 3 in
Nov 25 23:32:53 compute-0 ceph-mon[75654]: from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 23:32:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:53 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 23:32:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 25 23:32:53 compute-0 ceph-osd[89044]: osd.0 0 done with init, starting boot process
Nov 25 23:32:53 compute-0 ceph-osd[89044]: osd.0 0 start_boot
Nov 25 23:32:53 compute-0 ceph-osd[89044]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 23:32:53 compute-0 ceph-osd[89044]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 23:32:53 compute-0 ceph-osd[89044]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 23:32:53 compute-0 ceph-osd[89044]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 23:32:53 compute-0 ceph-osd[89044]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 25 23:32:53 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 25 23:32:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 23:32:53 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:53 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:32:53 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:53 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 23:32:53 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:53 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:32:53 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1106816781; not ready for session (expect reconnect)
Nov 25 23:32:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 23:32:53 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:53 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 23:32:53 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:54 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 23:32:54 compute-0 bash[89848]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 23:32:54 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 23:32:54 compute-0 bash[89848]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 23:32:54 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 23:32:54 compute-0 bash[89848]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 25 23:32:54 compute-0 ceph-mon[75654]: from='osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 23:32:54 compute-0 ceph-mon[75654]: osdmap e8: 3 total, 0 up, 3 in
Nov 25 23:32:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:54 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 23:32:54 compute-0 bash[89848]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 25 23:32:54 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1106816781; not ready for session (expect reconnect)
Nov 25 23:32:54 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:54 compute-0 bash[89848]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 23:32:54 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:54 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 23:32:54 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 23:32:54 compute-0 bash[89848]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 25 23:32:54 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate[89863]: --> ceph-volume raw activate successful for osd ID: 1
Nov 25 23:32:54 compute-0 bash[89848]: --> ceph-volume raw activate successful for osd ID: 1
Nov 25 23:32:54 compute-0 systemd[1]: libpod-43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313.scope: Deactivated successfully.
Nov 25 23:32:54 compute-0 systemd[1]: libpod-43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313.scope: Consumed 1.124s CPU time.
Nov 25 23:32:54 compute-0 conmon[89863]: conmon 43d84d64c2b680d592d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313.scope/container/memory.events
Nov 25 23:32:54 compute-0 podman[89848]: 2025-11-25 23:32:54.218805052 +0000 UTC m=+1.284726909 container died 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 25 23:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5942c5a27c07409ae2efe8e5d20486cc05504bcbdd829aaed05b262f06c8556f-merged.mount: Deactivated successfully.
Nov 25 23:32:54 compute-0 podman[89848]: 2025-11-25 23:32:54.298956703 +0000 UTC m=+1.364878560 container remove 43d84d64c2b680d592d665a0aefa7c645bbbc17b5359f2f8d6b1c5851f71c313 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1-activate, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:54 compute-0 podman[90034]: 2025-11-25 23:32:54.517718193 +0000 UTC m=+0.046054103 container create 210a65a79e016cf56b9bf57fc9f3a856ecf3e518112f002c5580b96eeb0ff119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c3c948bbf5dd1d5dba0d61b01af3573cb4bf2211b1d6b182cb9e0579712af9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c3c948bbf5dd1d5dba0d61b01af3573cb4bf2211b1d6b182cb9e0579712af9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c3c948bbf5dd1d5dba0d61b01af3573cb4bf2211b1d6b182cb9e0579712af9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c3c948bbf5dd1d5dba0d61b01af3573cb4bf2211b1d6b182cb9e0579712af9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c3c948bbf5dd1d5dba0d61b01af3573cb4bf2211b1d6b182cb9e0579712af9b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:54 compute-0 podman[90034]: 2025-11-25 23:32:54.493318986 +0000 UTC m=+0.021654906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:54 compute-0 podman[90034]: 2025-11-25 23:32:54.594795035 +0000 UTC m=+0.123130985 container init 210a65a79e016cf56b9bf57fc9f3a856ecf3e518112f002c5580b96eeb0ff119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:32:54 compute-0 podman[90034]: 2025-11-25 23:32:54.606530291 +0000 UTC m=+0.134866201 container start 210a65a79e016cf56b9bf57fc9f3a856ecf3e518112f002c5580b96eeb0ff119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:54 compute-0 bash[90034]: 210a65a79e016cf56b9bf57fc9f3a856ecf3e518112f002c5580b96eeb0ff119
Nov 25 23:32:54 compute-0 systemd[1]: Started Ceph osd.1 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:32:54 compute-0 ceph-osd[90055]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 23:32:54 compute-0 ceph-osd[90055]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 23:32:54 compute-0 ceph-osd[90055]: pidfile_write: ignore empty --pid-file
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613e95a3800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613e95a3800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613e95a3800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613e95a3800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613ea3db800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613ea3db800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613ea3db800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613ea3db800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613ea3db800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 23:32:54 compute-0 sudo[89132]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:32:54 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613e95a3800 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 23:32:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:54 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 25 23:32:54 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 23:32:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:32:54 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:54 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 25 23:32:54 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 25 23:32:54 compute-0 sudo[90068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:54 compute-0 sudo[90068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:54 compute-0 sudo[90068]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:54 compute-0 sudo[90095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:54 compute-0 sudo[90095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:54 compute-0 sudo[90095]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:54 compute-0 ceph-osd[90055]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 25 23:32:54 compute-0 ceph-osd[90055]: load: jerasure load: lrc 
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:32:54 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 23:32:54 compute-0 sudo[90120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:54 compute-0 sudo[90120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:54 compute-0 sudo[90120]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:55 compute-0 sudo[90150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:32:55 compute-0 sudo[90150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:55 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1106816781; not ready for session (expect reconnect)
Nov 25 23:32:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 23:32:55 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:55 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 23:32:55 compute-0 ceph-mon[75654]: purged_snaps scrub starts
Nov 25 23:32:55 compute-0 ceph-mon[75654]: purged_snaps scrub ok
Nov 25 23:32:55 compute-0 ceph-mon[75654]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:55 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:55 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:55 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:55 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 23:32:55 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:32:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 23:32:55 compute-0 podman[90220]: 2025-11-25 23:32:55.403371367 +0000 UTC m=+0.043361593 container create e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:55 compute-0 ceph-osd[89044]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 36.538 iops: 9353.738 elapsed_sec: 0.321
Nov 25 23:32:55 compute-0 ceph-osd[89044]: log_channel(cluster) log [WRN] : OSD bench result of 9353.738310 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 23:32:55 compute-0 ceph-osd[89044]: osd.0 0 waiting for initial osdmap
Nov 25 23:32:55 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0[89040]: 2025-11-25T23:32:55.439+0000 7f3003adf640 -1 osd.0 0 waiting for initial osdmap
Nov 25 23:32:55 compute-0 ceph-osd[89044]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 25 23:32:55 compute-0 ceph-osd[89044]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 25 23:32:55 compute-0 ceph-osd[89044]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 25 23:32:55 compute-0 ceph-osd[89044]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Nov 25 23:32:55 compute-0 systemd[1]: Started libpod-conmon-e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7.scope.
Nov 25 23:32:55 compute-0 ceph-osd[89044]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 23:32:55 compute-0 ceph-osd[89044]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 25 23:32:55 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-0[89040]: 2025-11-25T23:32:55.460+0000 7f2fff107640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 23:32:55 compute-0 ceph-osd[89044]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 25 23:32:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:55 compute-0 podman[90220]: 2025-11-25 23:32:55.383481478 +0000 UTC m=+0.023471734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 23:32:55 compute-0 ceph-osd[90055]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:55 compute-0 podman[90220]: 2025-11-25 23:32:55.489050882 +0000 UTC m=+0.129041118 container init e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluefs mount
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluefs mount shared_bdev_used = 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 23:32:55 compute-0 podman[90220]: 2025-11-25 23:32:55.495890651 +0000 UTC m=+0.135880887 container start e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:55 compute-0 podman[90220]: 2025-11-25 23:32:55.499370782 +0000 UTC m=+0.139361028 container attach e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 23:32:55 compute-0 laughing_moser[90236]: 167 167
Nov 25 23:32:55 compute-0 systemd[1]: libpod-e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7.scope: Deactivated successfully.
Nov 25 23:32:55 compute-0 podman[90220]: 2025-11-25 23:32:55.502280438 +0000 UTC m=+0.142270684 container died e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: RocksDB version: 7.9.2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Git sha 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: DB SUMMARY
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: DB Session ID:  5XYP8PC920X025ZAXNWI
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: CURRENT file:  CURRENT
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                         Options.error_if_exists: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.create_if_missing: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                                     Options.env: 0x5613ea42dc70
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                                Options.info_log: 0x5613e962a8a0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                              Options.statistics: (nil)
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.use_fsync: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                              Options.db_log_dir: 
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.write_buffer_manager: 0x5613ea540460
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.unordered_write: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.row_cache: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                              Options.wal_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.two_write_queues: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.wal_compression: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.atomic_flush: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.max_background_jobs: 4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.max_background_compactions: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.max_subcompactions: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.max_open_files: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Compression algorithms supported:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kZSTD supported: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kXpressCompression supported: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kBZip2Compression supported: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kLZ4Compression supported: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kZlibCompression supported: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kSnappyCompression supported: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e9617090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e9617090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e9617090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f6bcf38e55bac725bd126f9fa0309029912e9d3ffce053e386992e5aa2c96eb-merged.mount: Deactivated successfully.
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a522a4cb-9102-4aa4-a86b-8971b6d4b06b
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575542603, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575542806, "job": 1, "event": "recovery_finished"}
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: freelist init
Nov 25 23:32:55 compute-0 ceph-osd[90055]: freelist _read_cfg
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluefs umount
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) close
Nov 25 23:32:55 compute-0 podman[90220]: 2025-11-25 23:32:55.545793804 +0000 UTC m=+0.185784040 container remove e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_moser, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:55 compute-0 systemd[1]: libpod-conmon-e0b6b9df4dcc5738f932897d6d04fc9ce34ba0c094ff0f9e4a53d91a8d8425a7.scope: Deactivated successfully.
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bdev(0x5613ea45d400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluefs mount
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluefs mount shared_bdev_used = 4718592
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: RocksDB version: 7.9.2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Git sha 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: DB SUMMARY
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: DB Session ID:  5XYP8PC920X025ZAXNWJ
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: CURRENT file:  CURRENT
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                         Options.error_if_exists: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.create_if_missing: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                                     Options.env: 0x5613ea5e8460
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                                Options.info_log: 0x5613e962a600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                              Options.statistics: (nil)
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.use_fsync: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                              Options.db_log_dir: 
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.write_buffer_manager: 0x5613ea540460
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.unordered_write: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.row_cache: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                              Options.wal_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.two_write_queues: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.wal_compression: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.atomic_flush: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.max_background_jobs: 4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.max_background_compactions: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.max_subcompactions: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.max_open_files: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Compression algorithms supported:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kZSTD supported: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kXpressCompression supported: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kBZip2Compression supported: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kLZ4Compression supported: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kZlibCompression supported: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         kSnappyCompression supported: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962aa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e96171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e9617090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e9617090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 podman[90464]: 2025-11-25 23:32:55.788237251 +0000 UTC m=+0.054604576 container create 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:           Options.merge_operator: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613e962a380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5613e9617090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.compression: LZ4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.num_levels: 7
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a522a4cb-9102-4aa4-a86b-8971b6d4b06b
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575781136, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575786014, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113575, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a522a4cb-9102-4aa4-a86b-8971b6d4b06b", "db_session_id": "5XYP8PC920X025ZAXNWJ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575789027, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113575, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a522a4cb-9102-4aa4-a86b-8971b6d4b06b", "db_session_id": "5XYP8PC920X025ZAXNWJ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575791846, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113575, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a522a4cb-9102-4aa4-a86b-8971b6d4b06b", "db_session_id": "5XYP8PC920X025ZAXNWJ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113575793533, "job": 1, "event": "recovery_finished"}
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5613e965e000
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: DB pointer 0x5613ea51fa00
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 25 23:32:55 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:32:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:32:55 compute-0 ceph-osd[90055]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 23:32:55 compute-0 ceph-osd[90055]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 23:32:55 compute-0 ceph-osd[90055]: _get_class not permitted to load lua
Nov 25 23:32:55 compute-0 ceph-osd[90055]: _get_class not permitted to load sdk
Nov 25 23:32:55 compute-0 ceph-osd[90055]: _get_class not permitted to load test_remote_reads
Nov 25 23:32:55 compute-0 ceph-osd[90055]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 23:32:55 compute-0 ceph-osd[90055]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 23:32:55 compute-0 ceph-osd[90055]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 23:32:55 compute-0 ceph-osd[90055]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 23:32:55 compute-0 ceph-osd[90055]: osd.1 0 load_pgs
Nov 25 23:32:55 compute-0 ceph-osd[90055]: osd.1 0 load_pgs opened 0 pgs
Nov 25 23:32:55 compute-0 ceph-osd[90055]: osd.1 0 log_to_monitors true
Nov 25 23:32:55 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1[90051]: 2025-11-25T23:32:55.816+0000 7f5fc9dd1740 -1 osd.1 0 log_to_monitors true
Nov 25 23:32:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 25 23:32:55 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 23:32:55 compute-0 systemd[1]: Started libpod-conmon-25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e.scope.
Nov 25 23:32:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:55 compute-0 podman[90464]: 2025-11-25 23:32:55.764665796 +0000 UTC m=+0.031033201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:55 compute-0 podman[90464]: 2025-11-25 23:32:55.876044813 +0000 UTC m=+0.142412168 container init 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:32:55 compute-0 podman[90464]: 2025-11-25 23:32:55.881482125 +0000 UTC m=+0.147849440 container start 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 23:32:55 compute-0 podman[90464]: 2025-11-25 23:32:55.892513143 +0000 UTC m=+0.158880468 container attach 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 23:32:55 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:55 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:32:55
Nov 25 23:32:55 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:32:55 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:32:55 compute-0 ceph-mgr[75954]: [balancer INFO root] No pools available
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1106816781; not ready for session (expect reconnect)
Nov 25 23:32:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 23:32:56 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 25 23:32:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 25 23:32:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:32:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 23:32:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 25 23:32:56 compute-0 ceph-mon[75654]: Deploying daemon osd.2 on compute-0
Nov 25 23:32:56 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:56 compute-0 ceph-mon[75654]: OSD bench result of 9353.738310 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 23:32:56 compute-0 ceph-mon[75654]: from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 23:32:56 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:56 compute-0 ceph-osd[89044]: osd.0 9 state: booting -> active
Nov 25 23:32:56 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781] boot
Nov 25 23:32:56 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 25 23:32:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 23:32:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 23:32:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 23:32:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 25 23:32:56 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:56 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:32:56 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:56 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:32:56 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test[90695]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 25 23:32:56 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test[90695]:                             [--no-systemd] [--no-tmpfs]
Nov 25 23:32:56 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test[90695]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 23:32:56 compute-0 systemd[1]: libpod-25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e.scope: Deactivated successfully.
Nov 25 23:32:56 compute-0 podman[90464]: 2025-11-25 23:32:56.491425253 +0000 UTC m=+0.757792578 container died 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9700ba41fbcc74721d15b52e1822dd919f9e5e3ead4009f25d9917e250eb0fe-merged.mount: Deactivated successfully.
Nov 25 23:32:56 compute-0 podman[90464]: 2025-11-25 23:32:56.547084565 +0000 UTC m=+0.813451910 container remove 25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate-test, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:32:56 compute-0 systemd[1]: libpod-conmon-25019c7bd12a039afd42868f5ce40e9535b572be8f18b50d1f4c78b874dd140e.scope: Deactivated successfully.
Nov 25 23:32:56 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 23:32:56 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 23:32:56 compute-0 systemd[1]: Reloading.
Nov 25 23:32:56 compute-0 systemd-rc-local-generator[90757]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:56 compute-0 systemd-sysv-generator[90763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 25 23:32:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:32:57 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 23:32:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 25 23:32:57 compute-0 ceph-osd[90055]: osd.1 0 done with init, starting boot process
Nov 25 23:32:57 compute-0 ceph-osd[90055]: osd.1 0 start_boot
Nov 25 23:32:57 compute-0 ceph-osd[90055]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 23:32:57 compute-0 ceph-osd[90055]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 23:32:57 compute-0 ceph-osd[90055]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 23:32:57 compute-0 ceph-osd[90055]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 23:32:57 compute-0 ceph-osd[90055]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 25 23:32:57 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 25 23:32:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:57 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:32:57 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:57 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:57 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:32:57 compute-0 ceph-mon[75654]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 25 23:32:57 compute-0 ceph-mon[75654]: from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 23:32:57 compute-0 ceph-mon[75654]: osd.0 [v2:192.168.122.100:6802/1106816781,v1:192.168.122.100:6803/1106816781] boot
Nov 25 23:32:57 compute-0 ceph-mon[75654]: osdmap e9: 3 total, 1 up, 3 in
Nov 25 23:32:57 compute-0 ceph-mon[75654]: from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 23:32:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 25 23:32:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:57 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1453008780; not ready for session (expect reconnect)
Nov 25 23:32:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:57 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:57 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:57 compute-0 systemd[1]: Reloading.
Nov 25 23:32:57 compute-0 systemd-sysv-generator[90801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:32:57 compute-0 systemd-rc-local-generator[90796]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:32:57 compute-0 systemd[1]: Starting Ceph osd.2 for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:32:57 compute-0 podman[90855]: 2025-11-25 23:32:57.789459339 +0000 UTC m=+0.054019571 container create df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 23:32:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:57 compute-0 podman[90855]: 2025-11-25 23:32:57.774204941 +0000 UTC m=+0.038765263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:57 compute-0 podman[90855]: 2025-11-25 23:32:57.883440152 +0000 UTC m=+0.148000404 container init df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:32:57 compute-0 podman[90855]: 2025-11-25 23:32:57.902980252 +0000 UTC m=+0.167540484 container start df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 23:32:57 compute-0 podman[90855]: 2025-11-25 23:32:57.908445764 +0000 UTC m=+0.173006026 container attach df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:57 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 23:32:58 compute-0 ceph-mgr[75954]: [devicehealth INFO root] creating mgr pool
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 25 23:32:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 23:32:58 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1453008780; not ready for session (expect reconnect)
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:58 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:58 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 25 23:32:58 compute-0 ceph-mon[75654]: from='osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 23:32:58 compute-0 ceph-mon[75654]: osdmap e10: 3 total, 1 up, 3 in
Nov 25 23:32:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 23:32:58 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 25 23:32:58 compute-0 ceph-osd[89044]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 23:32:58 compute-0 ceph-osd[89044]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 25 23:32:58 compute-0 ceph-osd[89044]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 23:32:58 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:58 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:58 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:32:58 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 25 23:32:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 23:32:59 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 23:32:59 compute-0 bash[90855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 23:32:59 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 23:32:59 compute-0 bash[90855]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 23:32:59 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 23:32:59 compute-0 bash[90855]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 25 23:32:59 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 23:32:59 compute-0 bash[90855]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 25 23:32:59 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 23:32:59 compute-0 bash[90855]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 25 23:32:59 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 23:32:59 compute-0 bash[90855]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 23:32:59 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate[90869]: --> ceph-volume raw activate successful for osd ID: 2
Nov 25 23:32:59 compute-0 bash[90855]: --> ceph-volume raw activate successful for osd ID: 2
Nov 25 23:32:59 compute-0 systemd[1]: libpod-df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f.scope: Deactivated successfully.
Nov 25 23:32:59 compute-0 systemd[1]: libpod-df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f.scope: Consumed 1.264s CPU time.
Nov 25 23:32:59 compute-0 sudo[91013]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgyiaigpgecztwtgyjkeblvoqcfqcykn ; /usr/bin/python3'
Nov 25 23:32:59 compute-0 sudo[91013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:32:59 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1453008780; not ready for session (expect reconnect)
Nov 25 23:32:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:59 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:59 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 25 23:32:59 compute-0 ceph-mon[75654]: purged_snaps scrub starts
Nov 25 23:32:59 compute-0 ceph-mon[75654]: purged_snaps scrub ok
Nov 25 23:32:59 compute-0 ceph-mon[75654]: pgmap v32: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 23:32:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 23:32:59 compute-0 ceph-mon[75654]: osdmap e11: 3 total, 1 up, 3 in
Nov 25 23:32:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 23:32:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:59 compute-0 podman[91014]: 2025-11-25 23:32:59.231545094 +0000 UTC m=+0.049912083 container died df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:32:59 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 23:32:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 25 23:32:59 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 25 23:32:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:32:59 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:32:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:32:59 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:32:59 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:32:59 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-36e3fd58c3cece4f3cf004d5b549eddd7eb1e2bf4230c4408ae5065ef717c713-merged.mount: Deactivated successfully.
Nov 25 23:32:59 compute-0 podman[91014]: 2025-11-25 23:32:59.306626484 +0000 UTC m=+0.124993413 container remove df90cc3d5c5011392da723cb9b1e77b902cfc39aecd82b4c1bec98b0eaa80b9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:59 compute-0 python3[91021]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:32:59 compute-0 podman[91041]: 2025-11-25 23:32:59.449887713 +0000 UTC m=+0.049589085 container create 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:32:59 compute-0 systemd[1]: Started libpod-conmon-93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974.scope.
Nov 25 23:32:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb4d003a71d0e07d19be191d06af2442c0497305bb1e87b30e80cb5fb9abf58/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb4d003a71d0e07d19be191d06af2442c0497305bb1e87b30e80cb5fb9abf58/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eb4d003a71d0e07d19be191d06af2442c0497305bb1e87b30e80cb5fb9abf58/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:59 compute-0 podman[91041]: 2025-11-25 23:32:59.430887297 +0000 UTC m=+0.030588689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:32:59 compute-0 podman[91041]: 2025-11-25 23:32:59.529023088 +0000 UTC m=+0.128724490 container init 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:32:59 compute-0 podman[91041]: 2025-11-25 23:32:59.536970196 +0000 UTC m=+0.136671568 container start 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:32:59 compute-0 podman[91041]: 2025-11-25 23:32:59.541620727 +0000 UTC m=+0.141322099 container attach 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 23:32:59 compute-0 podman[91091]: 2025-11-25 23:32:59.553591599 +0000 UTC m=+0.043160477 container create 4adea0c725a0ee7ed4d56b00f30b101c6071ad14e4ab599d343ac613042f9d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 25 23:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3efc4fd8ff99b7c9a8786efa1fea789779bb133c619b97afae2f5b38590bb86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3efc4fd8ff99b7c9a8786efa1fea789779bb133c619b97afae2f5b38590bb86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3efc4fd8ff99b7c9a8786efa1fea789779bb133c619b97afae2f5b38590bb86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3efc4fd8ff99b7c9a8786efa1fea789779bb133c619b97afae2f5b38590bb86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3efc4fd8ff99b7c9a8786efa1fea789779bb133c619b97afae2f5b38590bb86/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 23:32:59 compute-0 podman[91091]: 2025-11-25 23:32:59.607356593 +0000 UTC m=+0.096925491 container init 4adea0c725a0ee7ed4d56b00f30b101c6071ad14e4ab599d343ac613042f9d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 23:32:59 compute-0 ceph-osd[90055]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 35.825 iops: 9171.129 elapsed_sec: 0.327
Nov 25 23:32:59 compute-0 ceph-osd[90055]: log_channel(cluster) log [WRN] : OSD bench result of 9171.128788 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 23:32:59 compute-0 ceph-osd[90055]: osd.1 0 waiting for initial osdmap
Nov 25 23:32:59 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1[90051]: 2025-11-25T23:32:59.607+0000 7f5fc5d51640 -1 osd.1 0 waiting for initial osdmap
Nov 25 23:32:59 compute-0 ceph-osd[90055]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 23:32:59 compute-0 ceph-osd[90055]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 23:32:59 compute-0 ceph-osd[90055]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 23:32:59 compute-0 ceph-osd[90055]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Nov 25 23:32:59 compute-0 podman[91091]: 2025-11-25 23:32:59.617672902 +0000 UTC m=+0.107241780 container start 4adea0c725a0ee7ed4d56b00f30b101c6071ad14e4ab599d343ac613042f9d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-osd-2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 23:32:59 compute-0 bash[91091]: 4adea0c725a0ee7ed4d56b00f30b101c6071ad14e4ab599d343ac613042f9d40
Nov 25 23:32:59 compute-0 podman[91091]: 2025-11-25 23:32:59.535014324 +0000 UTC m=+0.024583212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:32:59 compute-0 systemd[1]: Started Ceph osd.2 for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:32:59 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-1[90051]: 2025-11-25T23:32:59.632+0000 7f5fc1379640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 23:32:59 compute-0 ceph-osd[90055]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 23:32:59 compute-0 ceph-osd[90055]: osd.1 12 set_numa_affinity not setting numa affinity
Nov 25 23:32:59 compute-0 ceph-osd[90055]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 25 23:32:59 compute-0 ceph-osd[91111]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 23:32:59 compute-0 ceph-osd[91111]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 25 23:32:59 compute-0 ceph-osd[91111]: pidfile_write: ignore empty --pid-file
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bdev(0x56223ce45800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bdev(0x56223ce45800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bdev(0x56223ce45800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bdev(0x56223ce45800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bdev(0x56223dc87800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bdev(0x56223dc87800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bdev(0x56223dc87800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bdev(0x56223dc87800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bdev(0x56223dc87800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 23:32:59 compute-0 sudo[90150]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:32:59 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:32:59 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:32:59 compute-0 sudo[91126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:59 compute-0 sudo[91126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:59 compute-0 sudo[91126]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:59 compute-0 sudo[91151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:32:59 compute-0 sudo[91151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:59 compute-0 sudo[91151]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:59 compute-0 sudo[91176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:32:59 compute-0 sudo[91176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:59 compute-0 sudo[91176]: pam_unix(sudo:session): session closed for user root
Nov 25 23:32:59 compute-0 sudo[91202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:32:59 compute-0 sudo[91202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:32:59 compute-0 ceph-osd[91111]: bdev(0x56223ce45800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 23:32:59 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 23:33:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 23:33:00 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4021532597' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 23:33:00 compute-0 elastic_johnson[91083]: 
Nov 25 23:33:00 compute-0 elastic_johnson[91083]: {"fsid":"101922db-575f-58e2-980f-928050464f69","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":109,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":12,"num_osds":3,"num_up_osds":1,"osd_up_since":1764113576,"num_in_osds":3,"osd_in_since":1764113559,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":446984192,"bytes_avail":21023657984,"bytes_total":21470642176},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T23:32:57.989925+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 25 23:33:00 compute-0 systemd[1]: libpod-93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974.scope: Deactivated successfully.
Nov 25 23:33:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:00 compute-0 ceph-osd[91111]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 25 23:33:00 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1453008780; not ready for session (expect reconnect)
Nov 25 23:33:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:33:00 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:33:00 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: load: jerasure load: lrc 
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 23:33:00 compute-0 podman[91276]: 2025-11-25 23:33:00.191566169 +0000 UTC m=+0.024286574 container died 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eb4d003a71d0e07d19be191d06af2442c0497305bb1e87b30e80cb5fb9abf58-merged.mount: Deactivated successfully.
Nov 25 23:33:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 25 23:33:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Nov 25 23:33:00 compute-0 podman[91276]: 2025-11-25 23:33:00.246070662 +0000 UTC m=+0.078791087 container remove 93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974 (image=quay.io/ceph/ceph:v18, name=elastic_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:00 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780] boot
Nov 25 23:33:00 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Nov 25 23:33:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 25 23:33:00 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:33:00 compute-0 systemd[1]: libpod-conmon-93332430d001d6689d9429ef2b79931ced4cd157db021fbc101e95bc1d6eb974.scope: Deactivated successfully.
Nov 25 23:33:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:33:00 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:00 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:33:00 compute-0 ceph-osd[90055]: osd.1 13 state: booting -> active
Nov 25 23:33:00 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 23:33:00 compute-0 ceph-mon[75654]: osdmap e12: 3 total, 1 up, 3 in
Nov 25 23:33:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:33:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:00 compute-0 ceph-mon[75654]: OSD bench result of 9171.128788 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 23:33:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:00 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4021532597' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 23:33:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:33:00 compute-0 sudo[91013]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:00 compute-0 podman[91302]: 2025-11-25 23:33:00.366281309 +0000 UTC m=+0.073500380 container create 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:00 compute-0 systemd[1]: Started libpod-conmon-5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543.scope.
Nov 25 23:33:00 compute-0 podman[91302]: 2025-11-25 23:33:00.333616116 +0000 UTC m=+0.040835237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 23:33:00 compute-0 podman[91302]: 2025-11-25 23:33:00.48436384 +0000 UTC m=+0.191582961 container init 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:00 compute-0 podman[91302]: 2025-11-25 23:33:00.497506914 +0000 UTC m=+0.204725985 container start 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:00 compute-0 podman[91302]: 2025-11-25 23:33:00.501676053 +0000 UTC m=+0.208895194 container attach 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 23:33:00 compute-0 funny_hermann[91318]: 167 167
Nov 25 23:33:00 compute-0 systemd[1]: libpod-5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543.scope: Deactivated successfully.
Nov 25 23:33:00 compute-0 podman[91302]: 2025-11-25 23:33:00.505676736 +0000 UTC m=+0.212895777 container died 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-825c979f592cd15ffb4ba87c7aa2f7094477f82e27d5ce872badf9220eb0d55d-merged.mount: Deactivated successfully.
Nov 25 23:33:00 compute-0 podman[91302]: 2025-11-25 23:33:00.548165986 +0000 UTC m=+0.255385047 container remove 5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hermann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:00 compute-0 systemd[1]: libpod-conmon-5dbd7ab1509172b7d0e323f9b41b06fd739157fd61bf3592509bf55c9ef47543.scope: Deactivated successfully.
Nov 25 23:33:00 compute-0 sudo[91364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aetyvnxioqxgkltfzqzkxfullxtonjjg ; /usr/bin/python3'
Nov 25 23:33:00 compute-0 sudo[91364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:00 compute-0 ceph-osd[91111]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 23:33:00 compute-0 ceph-osd[91111]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd08c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluefs mount
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluefs mount shared_bdev_used = 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: RocksDB version: 7.9.2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Git sha 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: DB SUMMARY
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: DB Session ID:  QPL9YOS3W6R72EW4HN3U
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: CURRENT file:  CURRENT
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                         Options.error_if_exists: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.create_if_missing: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                                     Options.env: 0x56223dcd9c70
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                                Options.info_log: 0x56223cecc8a0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                              Options.statistics: (nil)
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.use_fsync: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                              Options.db_log_dir: 
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.write_buffer_manager: 0x56223dde2460
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.unordered_write: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.row_cache: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                              Options.wal_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.two_write_queues: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.wal_compression: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.atomic_flush: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.max_background_jobs: 4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.max_background_compactions: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.max_subcompactions: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.max_open_files: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Compression algorithms supported:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         kZSTD supported: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         kXpressCompression supported: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         kBZip2Compression supported: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         kLZ4Compression supported: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         kZlibCompression supported: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         kSnappyCompression supported: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb9090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:00 compute-0 podman[91372]: 2025-11-25 23:33:00.775609441 +0000 UTC m=+0.070489790 container create 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb9090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb9090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2f73a468-cc6b-47d0-b2f7-880fcbf5b7b1
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113580766822, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113580767032, "job": 1, "event": "recovery_finished"}
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 25 23:33:00 compute-0 ceph-osd[91111]: freelist init
Nov 25 23:33:00 compute-0 ceph-osd[91111]: freelist _read_cfg
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 23:33:00 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bluefs umount
Nov 25 23:33:00 compute-0 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 23:33:00 compute-0 systemd[1]: Started libpod-conmon-0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81.scope.
Nov 25 23:33:00 compute-0 python3[91366]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:00 compute-0 podman[91372]: 2025-11-25 23:33:00.745911986 +0000 UTC m=+0.040792365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ef8d5fc0aab03f52ed72e912e677db081d0e69e6fc4755a57d05a4899f6c19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ef8d5fc0aab03f52ed72e912e677db081d0e69e6fc4755a57d05a4899f6c19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ef8d5fc0aab03f52ed72e912e677db081d0e69e6fc4755a57d05a4899f6c19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ef8d5fc0aab03f52ed72e912e677db081d0e69e6fc4755a57d05a4899f6c19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:00 compute-0 podman[91372]: 2025-11-25 23:33:00.872701296 +0000 UTC m=+0.167581715 container init 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:00 compute-0 podman[91372]: 2025-11-25 23:33:00.884346699 +0000 UTC m=+0.179227038 container start 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 23:33:00 compute-0 podman[91372]: 2025-11-25 23:33:00.888003384 +0000 UTC m=+0.182883733 container attach 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:00 compute-0 podman[91585]: 2025-11-25 23:33:00.903270553 +0000 UTC m=+0.061780423 container create 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 23:33:00 compute-0 systemd[1]: Started libpod-conmon-61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9.scope.
Nov 25 23:33:00 compute-0 podman[91585]: 2025-11-25 23:33:00.868978968 +0000 UTC m=+0.027488838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22645ea58337fa75ac8ae6aabdfec1c0c8e15d8d6394f43cbf0f4fc36a0edcb3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22645ea58337fa75ac8ae6aabdfec1c0c8e15d8d6394f43cbf0f4fc36a0edcb3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:00 compute-0 podman[91585]: 2025-11-25 23:33:00.989585406 +0000 UTC m=+0.148095316 container init 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:00 compute-0 podman[91585]: 2025-11-25 23:33:00.995911001 +0000 UTC m=+0.154420861 container start 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:01 compute-0 podman[91585]: 2025-11-25 23:33:01.000655215 +0000 UTC m=+0.159165095 container attach 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bdev(0x56223dd09400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bluefs mount
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bluefs mount shared_bdev_used = 4718592
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: RocksDB version: 7.9.2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Git sha 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: DB SUMMARY
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: DB Session ID:  QPL9YOS3W6R72EW4HN3V
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: CURRENT file:  CURRENT
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                         Options.error_if_exists: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.create_if_missing: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                                     Options.env: 0x56223de8ab60
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                                Options.info_log: 0x56223dcd5a20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                              Options.statistics: (nil)
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.use_fsync: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                              Options.db_log_dir: 
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.write_buffer_manager: 0x56223dde26e0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.unordered_write: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.row_cache: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                              Options.wal_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.two_write_queues: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.wal_compression: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.atomic_flush: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.max_background_jobs: 4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.max_background_compactions: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.max_subcompactions: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.max_open_files: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Compression algorithms supported:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         kZSTD supported: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         kXpressCompression supported: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         kBZip2Compression supported: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         kLZ4Compression supported: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         kZlibCompression supported: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         kLZ4HCCompression supported: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         kSnappyCompression supported: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecca20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb9090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb9090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:           Options.merge_operator: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56223cecc380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56223ceb9090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.compression: LZ4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.num_levels: 7
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.bloom_locality: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                               Options.ttl: 2592000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                       Options.enable_blob_files: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                           Options.min_blob_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2f73a468-cc6b-47d0-b2f7-880fcbf5b7b1
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113581041856, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113581047549, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113581, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f73a468-cc6b-47d0-b2f7-880fcbf5b7b1", "db_session_id": "QPL9YOS3W6R72EW4HN3V", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113581051046, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113581, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f73a468-cc6b-47d0-b2f7-880fcbf5b7b1", "db_session_id": "QPL9YOS3W6R72EW4HN3V", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113581054833, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113581, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2f73a468-cc6b-47d0-b2f7-880fcbf5b7b1", "db_session_id": "QPL9YOS3W6R72EW4HN3V", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113581056787, "job": 1, "event": "recovery_finished"}
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56223d026000
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: DB pointer 0x56223ddcba00
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 25 23:33:01 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:33:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:33:01 compute-0 ceph-osd[91111]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 23:33:01 compute-0 ceph-osd[91111]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 23:33:01 compute-0 ceph-osd[91111]: _get_class not permitted to load lua
Nov 25 23:33:01 compute-0 ceph-osd[91111]: _get_class not permitted to load sdk
Nov 25 23:33:01 compute-0 ceph-osd[91111]: _get_class not permitted to load test_remote_reads
Nov 25 23:33:01 compute-0 ceph-osd[91111]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 23:33:01 compute-0 ceph-osd[91111]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 23:33:01 compute-0 ceph-osd[91111]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 23:33:01 compute-0 ceph-osd[91111]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 23:33:01 compute-0 ceph-osd[91111]: osd.2 0 load_pgs
Nov 25 23:33:01 compute-0 ceph-osd[91111]: osd.2 0 load_pgs opened 0 pgs
Nov 25 23:33:01 compute-0 ceph-osd[91111]: osd.2 0 log_to_monitors true
Nov 25 23:33:01 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2[91107]: 2025-11-25T23:33:01.098+0000 7f7cf2338740 -1 osd.2 0 log_to_monitors true
Nov 25 23:33:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 25 23:33:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 23:33:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 25 23:33:01 compute-0 ceph-mon[75654]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 25 23:33:01 compute-0 ceph-mon[75654]: osd.1 [v2:192.168.122.100:6806/1453008780,v1:192.168.122.100:6807/1453008780] boot
Nov 25 23:33:01 compute-0 ceph-mon[75654]: osdmap e13: 3 total, 2 up, 3 in
Nov 25 23:33:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 25 23:33:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:01 compute-0 ceph-mon[75654]: from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 23:33:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 23:33:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 25 23:33:01 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 25 23:33:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 25 23:33:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 23:33:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 25 23:33:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:33:01 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:01 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:33:01 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:01 compute-0 ceph-mgr[75954]: [devicehealth INFO root] creating main.db for devicehealth
Nov 25 23:33:01 compute-0 ceph-mgr[75954]: [devicehealth INFO root] Check health
Nov 25 23:33:01 compute-0 ceph-mgr[75954]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 25 23:33:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 23:33:01 compute-0 sudo[91852]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 25 23:33:01 compute-0 sudo[91852]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 23:33:01 compute-0 sudo[91852]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 25 23:33:01 compute-0 sudo[91852]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 23:33:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 25 23:33:01 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 23:33:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 23:33:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4012357197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:01 compute-0 brave_yalow[91582]: {
Nov 25 23:33:01 compute-0 brave_yalow[91582]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "osd_id": 2,
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "type": "bluestore"
Nov 25 23:33:01 compute-0 brave_yalow[91582]:     },
Nov 25 23:33:01 compute-0 brave_yalow[91582]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "osd_id": 1,
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "type": "bluestore"
Nov 25 23:33:01 compute-0 brave_yalow[91582]:     },
Nov 25 23:33:01 compute-0 brave_yalow[91582]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "osd_id": 0,
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:01 compute-0 brave_yalow[91582]:         "type": "bluestore"
Nov 25 23:33:01 compute-0 brave_yalow[91582]:     }
Nov 25 23:33:01 compute-0 brave_yalow[91582]: }
Nov 25 23:33:01 compute-0 systemd[1]: libpod-0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81.scope: Deactivated successfully.
Nov 25 23:33:01 compute-0 podman[91372]: 2025-11-25 23:33:01.988051834 +0000 UTC m=+1.282932183 container died 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:33:01 compute-0 systemd[1]: libpod-0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81.scope: Consumed 1.089s CPU time.
Nov 25 23:33:01 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 25 23:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-45ef8d5fc0aab03f52ed72e912e677db081d0e69e6fc4755a57d05a4899f6c19-merged.mount: Deactivated successfully.
Nov 25 23:33:02 compute-0 podman[91372]: 2025-11-25 23:33:02.064959271 +0000 UTC m=+1.359839660 container remove 0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yalow, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 23:33:02 compute-0 systemd[1]: libpod-conmon-0bcdf0aa7eeb3b9a4c37f68326888bbce480b7b53d0f6f1b01bafd48b96f2f81.scope: Deactivated successfully.
Nov 25 23:33:02 compute-0 sudo[91202]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:02 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 23:33:02 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 23:33:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:02 compute-0 sudo[91900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:02 compute-0 sudo[91900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:02 compute-0 sudo[91900]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 25 23:33:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 23:33:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4012357197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 25 23:33:02 compute-0 ceph-osd[91111]: osd.2 0 done with init, starting boot process
Nov 25 23:33:02 compute-0 ceph-osd[91111]: osd.2 0 start_boot
Nov 25 23:33:02 compute-0 ceph-osd[91111]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 23:33:02 compute-0 ceph-osd[91111]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 23:33:02 compute-0 ceph-osd[91111]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 23:33:02 compute-0 ceph-osd[91111]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 23:33:02 compute-0 ceph-osd[91111]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 25 23:33:02 compute-0 gallant_goldberg[91602]: pool 'vms' created
Nov 25 23:33:02 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 25 23:33:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:33:02 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:02 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:33:02 compute-0 ceph-mon[75654]: from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 23:33:02 compute-0 ceph-mon[75654]: osdmap e14: 3 total, 2 up, 3 in
Nov 25 23:33:02 compute-0 ceph-mon[75654]: from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 23:33:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:02 compute-0 ceph-mon[75654]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 23:33:02 compute-0 ceph-mon[75654]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 23:33:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 25 23:33:02 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4012357197' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:02 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2972592351; not ready for session (expect reconnect)
Nov 25 23:33:02 compute-0 systemd[1]: libpod-61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9.scope: Deactivated successfully.
Nov 25 23:33:02 compute-0 podman[91585]: 2025-11-25 23:33:02.318638541 +0000 UTC m=+1.477148401 container died 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:33:02 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.gwqfsl(active, since 66s)
Nov 25 23:33:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:33:02 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:02 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:33:02 compute-0 sudo[91925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:33:02 compute-0 sudo[91925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:02 compute-0 sudo[91925]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-22645ea58337fa75ac8ae6aabdfec1c0c8e15d8d6394f43cbf0f4fc36a0edcb3-merged.mount: Deactivated successfully.
Nov 25 23:33:02 compute-0 podman[91585]: 2025-11-25 23:33:02.429532965 +0000 UTC m=+1.588042805 container remove 61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:02 compute-0 systemd[1]: libpod-conmon-61d318b2d4bf7b48acd1bf8bba263d726d120ed5cf9e8624da0aff2eb1b80ef9.scope: Deactivated successfully.
Nov 25 23:33:02 compute-0 sudo[91364]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:02 compute-0 sudo[91962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:02 compute-0 sudo[91962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:02 compute-0 sudo[91962]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:02 compute-0 sudo[91987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:02 compute-0 sudo[91987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:02 compute-0 sudo[91987]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:02 compute-0 sudo[92058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzvyutmpfctybjkrbauupqrlolnlrhli ; /usr/bin/python3'
Nov 25 23:33:02 compute-0 sudo[92058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:02 compute-0 sudo[92016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:02 compute-0 sudo[92016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:02 compute-0 sudo[92016]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:02 compute-0 sudo[92063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 23:33:02 compute-0 sudo[92063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:02 compute-0 python3[92061]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:02 compute-0 podman[92089]: 2025-11-25 23:33:02.905116077 +0000 UTC m=+0.051671009 container create adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:02 compute-0 systemd[1]: Started libpod-conmon-adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7.scope.
Nov 25 23:33:02 compute-0 podman[92089]: 2025-11-25 23:33:02.887795316 +0000 UTC m=+0.034350338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d085a551c01f9c012618accd6ccd03f4f0ad02cb851fe43aa5447daddd85da5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d085a551c01f9c012618accd6ccd03f4f0ad02cb851fe43aa5447daddd85da5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:03 compute-0 podman[92089]: 2025-11-25 23:33:03.000310482 +0000 UTC m=+0.146865424 container init adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 23:33:03 compute-0 podman[92089]: 2025-11-25 23:33:03.009991584 +0000 UTC m=+0.156546556 container start adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:03 compute-0 podman[92089]: 2025-11-25 23:33:03.016183616 +0000 UTC m=+0.162738558 container attach adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:03 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2972592351; not ready for session (expect reconnect)
Nov 25 23:33:03 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:33:03 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:03 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:33:03 compute-0 ceph-mon[75654]: pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 25 23:33:03 compute-0 ceph-mon[75654]: from='osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 23:33:03 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4012357197' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:03 compute-0 ceph-mon[75654]: osdmap e15: 3 total, 2 up, 3 in
Nov 25 23:33:03 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:03 compute-0 ceph-mon[75654]: mgrmap e9: compute-0.gwqfsl(active, since 66s)
Nov 25 23:33:03 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:03 compute-0 podman[92180]: 2025-11-25 23:33:03.36579041 +0000 UTC m=+0.086580471 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:03 compute-0 podman[92180]: 2025-11-25 23:33:03.48073571 +0000 UTC m=+0.201525731 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 23:33:03 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 23:33:03 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2148731035' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:03 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v40: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 25 23:33:04 compute-0 sudo[92063]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:04 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:04 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:04 compute-0 sudo[92322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:04 compute-0 sudo[92322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:04 compute-0 sudo[92322]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:04 compute-0 sudo[92347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:04 compute-0 sudo[92347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:04 compute-0 sudo[92347]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:04 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2972592351; not ready for session (expect reconnect)
Nov 25 23:33:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:33:04 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:04 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:33:04 compute-0 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 23:33:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 25 23:33:04 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2148731035' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 25 23:33:04 compute-0 focused_curie[92125]: pool 'volumes' created
Nov 25 23:33:04 compute-0 ceph-mon[75654]: purged_snaps scrub starts
Nov 25 23:33:04 compute-0 ceph-mon[75654]: purged_snaps scrub ok
Nov 25 23:33:04 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:04 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2148731035' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:04 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:04 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:04 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:04 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 25 23:33:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:33:04 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:04 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:33:04 compute-0 systemd[1]: libpod-adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7.scope: Deactivated successfully.
Nov 25 23:33:04 compute-0 podman[92089]: 2025-11-25 23:33:04.364824713 +0000 UTC m=+1.511379675 container died adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:04 compute-0 sudo[92372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:04 compute-0 sudo[92372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:04 compute-0 sudo[92372]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d085a551c01f9c012618accd6ccd03f4f0ad02cb851fe43aa5447daddd85da5-merged.mount: Deactivated successfully.
Nov 25 23:33:04 compute-0 podman[92089]: 2025-11-25 23:33:04.449820911 +0000 UTC m=+1.596375853 container remove adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7 (image=quay.io/ceph/ceph:v18, name=focused_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 23:33:04 compute-0 sudo[92400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- inventory --format=json-pretty --filter-for-batch
Nov 25 23:33:04 compute-0 sudo[92400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:04 compute-0 systemd[1]: libpod-conmon-adcc9482bfecea459aa6240a7fb508ce4d2cd3e21133d919bff813bbe7d89fb7.scope: Deactivated successfully.
Nov 25 23:33:04 compute-0 sudo[92058]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:04 compute-0 sudo[92465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fakbkirfwfdzetszfbwtvfujzjscbnaz ; /usr/bin/python3'
Nov 25 23:33:04 compute-0 sudo[92465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:04 compute-0 python3[92470]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:04 compute-0 podman[92499]: 2025-11-25 23:33:04.813292197 +0000 UTC m=+0.043649220 container create 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:04 compute-0 systemd[1]: Started libpod-conmon-12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19.scope.
Nov 25 23:33:04 compute-0 podman[92500]: 2025-11-25 23:33:04.851236567 +0000 UTC m=+0.070175302 container create 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45389f5750efdbd029f68dbb9ca2288cc69393e186d0b5c5cbcbe4a424ffad9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45389f5750efdbd029f68dbb9ca2288cc69393e186d0b5c5cbcbe4a424ffad9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:04 compute-0 systemd[1]: Started libpod-conmon-2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06.scope.
Nov 25 23:33:04 compute-0 podman[92499]: 2025-11-25 23:33:04.881359744 +0000 UTC m=+0.111716797 container init 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:04 compute-0 podman[92499]: 2025-11-25 23:33:04.791879168 +0000 UTC m=+0.022236201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:04 compute-0 podman[92499]: 2025-11-25 23:33:04.888552281 +0000 UTC m=+0.118909324 container start 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 23:33:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:04 compute-0 podman[92499]: 2025-11-25 23:33:04.895947084 +0000 UTC m=+0.126304157 container attach 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:04 compute-0 ceph-osd[91111]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.763 iops: 8387.282 elapsed_sec: 0.358
Nov 25 23:33:04 compute-0 ceph-osd[91111]: log_channel(cluster) log [WRN] : OSD bench result of 8387.282290 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 23:33:04 compute-0 podman[92500]: 2025-11-25 23:33:04.811395168 +0000 UTC m=+0.030333933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:04 compute-0 ceph-osd[91111]: osd.2 0 waiting for initial osdmap
Nov 25 23:33:04 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2[91107]: 2025-11-25T23:33:04.905+0000 7f7ceeacf640 -1 osd.2 0 waiting for initial osdmap
Nov 25 23:33:04 compute-0 podman[92500]: 2025-11-25 23:33:04.909080706 +0000 UTC m=+0.128019441 container init 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 23:33:04 compute-0 podman[92500]: 2025-11-25 23:33:04.913841841 +0000 UTC m=+0.132780586 container start 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 23:33:04 compute-0 ceph-osd[91111]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 23:33:04 compute-0 ceph-osd[91111]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 23:33:04 compute-0 ceph-osd[91111]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 23:33:04 compute-0 ceph-osd[91111]: osd.2 16 check_osdmap_features require_osd_release unknown -> reef
Nov 25 23:33:04 compute-0 modest_tesla[92534]: 167 167
Nov 25 23:33:04 compute-0 systemd[1]: libpod-2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06.scope: Deactivated successfully.
Nov 25 23:33:04 compute-0 podman[92500]: 2025-11-25 23:33:04.921129631 +0000 UTC m=+0.140068366 container attach 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:04 compute-0 podman[92500]: 2025-11-25 23:33:04.921955963 +0000 UTC m=+0.140894678 container died 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 23:33:04 compute-0 ceph-osd[91111]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 23:33:04 compute-0 ceph-osd[91111]: osd.2 16 set_numa_affinity not setting numa affinity
Nov 25 23:33:04 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-osd-2[91107]: 2025-11-25T23:33:04.931+0000 7f7ce98e0640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 23:33:04 compute-0 ceph-osd[91111]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 25 23:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd3ab010e57c0a76640193618e37e3a410729a70eec1d6d28d6bbe0c64334b8c-merged.mount: Deactivated successfully.
Nov 25 23:33:04 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:04 compute-0 podman[92500]: 2025-11-25 23:33:04.963891967 +0000 UTC m=+0.182830682 container remove 2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 25 23:33:04 compute-0 systemd[1]: libpod-conmon-2c2900f04f4b3e0747677f29f4926014ccf1cfe4274e14050d6fbae795244f06.scope: Deactivated successfully.
Nov 25 23:33:05 compute-0 podman[92560]: 2025-11-25 23:33:05.097296149 +0000 UTC m=+0.037011197 container create b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 23:33:05 compute-0 systemd[1]: Started libpod-conmon-b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59.scope.
Nov 25 23:33:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374f0b48235f07547c63280cb20ea184c43db0de5905c416c947c81d29e0fe41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374f0b48235f07547c63280cb20ea184c43db0de5905c416c947c81d29e0fe41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374f0b48235f07547c63280cb20ea184c43db0de5905c416c947c81d29e0fe41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374f0b48235f07547c63280cb20ea184c43db0de5905c416c947c81d29e0fe41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:05 compute-0 podman[92560]: 2025-11-25 23:33:05.156518684 +0000 UTC m=+0.096233782 container init b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 23:33:05 compute-0 podman[92560]: 2025-11-25 23:33:05.164977925 +0000 UTC m=+0.104692963 container start b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 23:33:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:05 compute-0 podman[92560]: 2025-11-25 23:33:05.167960813 +0000 UTC m=+0.107675891 container attach b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 25 23:33:05 compute-0 podman[92560]: 2025-11-25 23:33:05.08126848 +0000 UTC m=+0.020983548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:05 compute-0 ceph-mgr[75954]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/2972592351; not ready for session (expect reconnect)
Nov 25 23:33:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:33:05 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:05 compute-0 ceph-mgr[75954]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 25 23:33:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 25 23:33:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 25 23:33:05 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351] boot
Nov 25 23:33:05 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 25 23:33:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 25 23:33:05 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:05 compute-0 ceph-mon[75654]: pgmap v40: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 25 23:33:05 compute-0 ceph-mon[75654]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 23:33:05 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2148731035' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:05 compute-0 ceph-mon[75654]: osdmap e16: 3 total, 2 up, 3 in
Nov 25 23:33:05 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:05 compute-0 ceph-mon[75654]: OSD bench result of 8387.282290 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 23:33:05 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:05 compute-0 ceph-osd[91111]: osd.2 17 state: booting -> active
Nov 25 23:33:05 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[15,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:05 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 23:33:05 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2947604844' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:05 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v43: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 25 23:33:06 compute-0 ceph-mon[75654]: osd.2 [v2:192.168.122.100:6810/2972592351,v1:192.168.122.100:6811/2972592351] boot
Nov 25 23:33:06 compute-0 ceph-mon[75654]: osdmap e17: 3 total, 3 up, 3 in
Nov 25 23:33:06 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 25 23:33:06 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2947604844' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2947604844' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 25 23:33:06 compute-0 musing_tharp[92528]: pool 'backups' created
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 25 23:33:06 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[15,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:06 compute-0 systemd[1]: libpod-12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19.scope: Deactivated successfully.
Nov 25 23:33:06 compute-0 podman[92499]: 2025-11-25 23:33:06.390345665 +0000 UTC m=+1.620702718 container died 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 23:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-45389f5750efdbd029f68dbb9ca2288cc69393e186d0b5c5cbcbe4a424ffad9e-merged.mount: Deactivated successfully.
Nov 25 23:33:06 compute-0 podman[92499]: 2025-11-25 23:33:06.432454533 +0000 UTC m=+1.662811556 container remove 12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19 (image=quay.io/ceph/ceph:v18, name=musing_tharp, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:06 compute-0 systemd[1]: libpod-conmon-12db613ded0e8168c8a9b0561221e25ff4791e3f71435f62d14d70643ce2ab19.scope: Deactivated successfully.
Nov 25 23:33:06 compute-0 sudo[92465]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]: [
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:     {
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:         "available": false,
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:         "ceph_device": false,
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:         "lsm_data": {},
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:         "lvs": [],
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:         "path": "/dev/sr0",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:         "rejected_reasons": [
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "Insufficient space (<5GB)",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "Has a FileSystem"
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:         ],
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:         "sys_api": {
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "actuators": null,
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "device_nodes": "sr0",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "devname": "sr0",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "human_readable_size": "482.00 KB",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "id_bus": "ata",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "model": "QEMU DVD-ROM",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "nr_requests": "2",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "parent": "/dev/sr0",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "partitions": {},
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "path": "/dev/sr0",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "removable": "1",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "rev": "2.5+",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "ro": "0",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "rotational": "1",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "sas_address": "",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "sas_device_handle": "",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "scheduler_mode": "mq-deadline",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "sectors": 0,
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "sectorsize": "2048",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "size": 493568.0,
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "support_discard": "2048",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "type": "disk",
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:             "vendor": "QEMU"
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:         }
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]:     }
Nov 25 23:33:06 compute-0 sleepy_pascal[92577]: ]
Nov 25 23:33:06 compute-0 systemd[1]: libpod-b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59.scope: Deactivated successfully.
Nov 25 23:33:06 compute-0 podman[92560]: 2025-11-25 23:33:06.486555516 +0000 UTC m=+1.426270564 container died b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:06 compute-0 systemd[1]: libpod-b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59.scope: Consumed 1.344s CPU time.
Nov 25 23:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-374f0b48235f07547c63280cb20ea184c43db0de5905c416c947c81d29e0fe41-merged.mount: Deactivated successfully.
Nov 25 23:33:06 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:06 compute-0 podman[92560]: 2025-11-25 23:33:06.539062797 +0000 UTC m=+1.478777855 container remove b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:06 compute-0 systemd[1]: libpod-conmon-b9cf2433755b13e09ff932f459169b0f7fdf385cfc24d87a334303e95772ca59.scope: Deactivated successfully.
Nov 25 23:33:06 compute-0 sudo[92400]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 23:33:06 compute-0 ceph-mgr[75954]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 23:33:06 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 25 23:33:06 compute-0 ceph-mgr[75954]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 23:33:06 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:06 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 1354eb9c-04eb-4daa-b985-60786bec6847 does not exist
Nov 25 23:33:06 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 0d4b1bb6-3fe4-42c2-af98-0e58d1c5aa27 does not exist
Nov 25 23:33:06 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 84cd5a23-c4d2-4bc8-9ac8-3bc82ffdeca4 does not exist
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:33:06 compute-0 sudo[94437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oelfkongttfcjubmlpacybhidcnvhsib ; /usr/bin/python3'
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:33:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:06 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:06 compute-0 sudo[94437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:06 compute-0 sudo[94440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:06 compute-0 sudo[94440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:06 compute-0 sudo[94440]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:06 compute-0 sudo[94465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:06 compute-0 sudo[94465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:06 compute-0 sudo[94465]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:06 compute-0 python3[94439]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:06 compute-0 sudo[94490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:06 compute-0 sudo[94490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:06 compute-0 sudo[94490]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:06 compute-0 podman[94496]: 2025-11-25 23:33:06.780158389 +0000 UTC m=+0.039359699 container create c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:06 compute-0 systemd[1]: Started libpod-conmon-c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e.scope.
Nov 25 23:33:06 compute-0 sudo[94528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:33:06 compute-0 sudo[94528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a81d21b0c9b652a6a308411996b5d46834deac692be7557f52cfe10850c471f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a81d21b0c9b652a6a308411996b5d46834deac692be7557f52cfe10850c471f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:06 compute-0 podman[94496]: 2025-11-25 23:33:06.766360328 +0000 UTC m=+0.025561658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:06 compute-0 podman[94496]: 2025-11-25 23:33:06.863592045 +0000 UTC m=+0.122793385 container init c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:06 compute-0 podman[94496]: 2025-11-25 23:33:06.870454745 +0000 UTC m=+0.129656095 container start c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:06 compute-0 podman[94496]: 2025-11-25 23:33:06.874261344 +0000 UTC m=+0.133462684 container attach c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:07 compute-0 podman[94599]: 2025-11-25 23:33:07.14309758 +0000 UTC m=+0.037177731 container create 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:07 compute-0 systemd[1]: Started libpod-conmon-96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e.scope.
Nov 25 23:33:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:07 compute-0 podman[94599]: 2025-11-25 23:33:07.207951813 +0000 UTC m=+0.102031984 container init 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:07 compute-0 podman[94599]: 2025-11-25 23:33:07.21513494 +0000 UTC m=+0.109215091 container start 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 23:33:07 compute-0 epic_bohr[94617]: 167 167
Nov 25 23:33:07 compute-0 systemd[1]: libpod-96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e.scope: Deactivated successfully.
Nov 25 23:33:07 compute-0 podman[94599]: 2025-11-25 23:33:07.219087513 +0000 UTC m=+0.113167694 container attach 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:33:07 compute-0 podman[94599]: 2025-11-25 23:33:07.219835743 +0000 UTC m=+0.113915894 container died 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:07 compute-0 podman[94599]: 2025-11-25 23:33:07.125345497 +0000 UTC m=+0.019425668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c61c6730fe8643d0fbea1f08bf1531670e9f447dce0266a11c55fc2680ebc05d-merged.mount: Deactivated successfully.
Nov 25 23:33:07 compute-0 podman[94599]: 2025-11-25 23:33:07.249749084 +0000 UTC m=+0.143829235 container remove 96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:07 compute-0 systemd[1]: libpod-conmon-96acf4c0e158046e16ef23fd71bb2c3abd813766b91110dda557b43cf76f743e.scope: Deactivated successfully.
Nov 25 23:33:07 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 25 23:33:07 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 25 23:33:07 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 25 23:33:07 compute-0 ceph-mon[75654]: pgmap v43: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2947604844' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:07 compute-0 ceph-mon[75654]: osdmap e18: 3 total, 3 up, 3 in
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:33:07 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:07 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:07 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 23:33:07 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3214176959' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:07 compute-0 podman[94658]: 2025-11-25 23:33:07.427860692 +0000 UTC m=+0.052226804 container create a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 23:33:07 compute-0 systemd[1]: Started libpod-conmon-a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b.scope.
Nov 25 23:33:07 compute-0 podman[94658]: 2025-11-25 23:33:07.408484006 +0000 UTC m=+0.032850148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:07 compute-0 podman[94658]: 2025-11-25 23:33:07.515837348 +0000 UTC m=+0.140203480 container init a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:07 compute-0 podman[94658]: 2025-11-25 23:33:07.525129231 +0000 UTC m=+0.149495333 container start a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 23:33:07 compute-0 podman[94658]: 2025-11-25 23:33:07.528676943 +0000 UTC m=+0.153043045 container attach a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:07 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v46: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 25 23:33:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3214176959' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 25 23:33:08 compute-0 trusting_nobel[94555]: pool 'images' created
Nov 25 23:33:08 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 25 23:33:08 compute-0 ceph-mon[75654]: Adjusting osd_memory_target on compute-0 to 43690k
Nov 25 23:33:08 compute-0 ceph-mon[75654]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 25 23:33:08 compute-0 ceph-mon[75654]: osdmap e19: 3 total, 3 up, 3 in
Nov 25 23:33:08 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3214176959' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:08 compute-0 systemd[1]: libpod-c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e.scope: Deactivated successfully.
Nov 25 23:33:08 compute-0 conmon[94555]: conmon c3e69e5fa3707aa6a37c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e.scope/container/memory.events
Nov 25 23:33:08 compute-0 podman[94702]: 2025-11-25 23:33:08.495010502 +0000 UTC m=+0.042058779 container died c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 25 23:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a81d21b0c9b652a6a308411996b5d46834deac692be7557f52cfe10850c471f-merged.mount: Deactivated successfully.
Nov 25 23:33:08 compute-0 podman[94702]: 2025-11-25 23:33:08.537071591 +0000 UTC m=+0.084119858 container remove c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e (image=quay.io/ceph/ceph:v18, name=trusting_nobel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 23:33:08 compute-0 systemd[1]: libpod-conmon-c3e69e5fa3707aa6a37c3f95414a8f2b41102479b8e49fb83e864ee9f8365b7e.scope: Deactivated successfully.
Nov 25 23:33:08 compute-0 vigorous_bartik[94678]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:33:08 compute-0 vigorous_bartik[94678]: --> relative data size: 1.0
Nov 25 23:33:08 compute-0 vigorous_bartik[94678]: --> All data devices are unavailable
Nov 25 23:33:08 compute-0 sudo[94437]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:08 compute-0 systemd[1]: libpod-a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b.scope: Deactivated successfully.
Nov 25 23:33:08 compute-0 podman[94658]: 2025-11-25 23:33:08.587777254 +0000 UTC m=+1.212143366 container died a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bb7572dfa03ddec6f7bc1dd5332f0d50a418e4cf0282dc1ff9e6cf5a4da2333-merged.mount: Deactivated successfully.
Nov 25 23:33:08 compute-0 podman[94658]: 2025-11-25 23:33:08.639091242 +0000 UTC m=+1.263457344 container remove a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:33:08 compute-0 systemd[1]: libpod-conmon-a261bb2134be6bda42492c109a9b96ec7b2d0ec9d397b58b985615fbe14d2d7b.scope: Deactivated successfully.
Nov 25 23:33:08 compute-0 sudo[94528]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:08 compute-0 sudo[94780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjglgtbiwmrkmwmolhzjutkzxxcwfnhz ; /usr/bin/python3'
Nov 25 23:33:08 compute-0 sudo[94737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:08 compute-0 sudo[94780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:08 compute-0 sudo[94737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:08 compute-0 sudo[94737]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:08 compute-0 sudo[94785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:08 compute-0 sudo[94785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:08 compute-0 sudo[94785]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:08 compute-0 sudo[94810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:08 compute-0 sudo[94810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:08 compute-0 sudo[94810]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:08 compute-0 python3[94783]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:08 compute-0 sudo[94835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:33:08 compute-0 sudo[94835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:08 compute-0 podman[94842]: 2025-11-25 23:33:08.908271568 +0000 UTC m=+0.049127383 container create c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:08 compute-0 systemd[1]: Started libpod-conmon-c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39.scope.
Nov 25 23:33:08 compute-0 podman[94842]: 2025-11-25 23:33:08.883630345 +0000 UTC m=+0.024486250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9da050e78968385c17d86bd9934a095c93c62dfe03048f6adcbf1408a81c8d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9da050e78968385c17d86bd9934a095c93c62dfe03048f6adcbf1408a81c8d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:09 compute-0 podman[94842]: 2025-11-25 23:33:09.011963234 +0000 UTC m=+0.152819149 container init c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 23:33:09 compute-0 podman[94842]: 2025-11-25 23:33:09.024016548 +0000 UTC m=+0.164872373 container start c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:09 compute-0 podman[94842]: 2025-11-25 23:33:09.030559019 +0000 UTC m=+0.171415074 container attach c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:33:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:09 compute-0 podman[94918]: 2025-11-25 23:33:09.22371498 +0000 UTC m=+0.040831246 container create 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:33:09 compute-0 systemd[1]: Started libpod-conmon-49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9.scope.
Nov 25 23:33:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:09 compute-0 podman[94918]: 2025-11-25 23:33:09.29190467 +0000 UTC m=+0.109020936 container init 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 23:33:09 compute-0 podman[94918]: 2025-11-25 23:33:09.299209611 +0000 UTC m=+0.116325877 container start 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:09 compute-0 podman[94918]: 2025-11-25 23:33:09.207777515 +0000 UTC m=+0.024893801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:09 compute-0 podman[94918]: 2025-11-25 23:33:09.302353502 +0000 UTC m=+0.119469768 container attach 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:09 compute-0 heuristic_borg[94936]: 167 167
Nov 25 23:33:09 compute-0 systemd[1]: libpod-49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9.scope: Deactivated successfully.
Nov 25 23:33:09 compute-0 conmon[94936]: conmon 49d5f6a3b9b309113529 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9.scope/container/memory.events
Nov 25 23:33:09 compute-0 podman[94918]: 2025-11-25 23:33:09.30493129 +0000 UTC m=+0.122047566 container died 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-d65ba2f3699399f3c6879ba765f405421efbafad40fb55bc43344fde056530ed-merged.mount: Deactivated successfully.
Nov 25 23:33:09 compute-0 podman[94918]: 2025-11-25 23:33:09.336417811 +0000 UTC m=+0.153534077 container remove 49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 23:33:09 compute-0 systemd[1]: libpod-conmon-49d5f6a3b9b309113529c47eec5aea0774db8663708de45ff11e8684897a0ab9.scope: Deactivated successfully.
Nov 25 23:33:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 25 23:33:09 compute-0 ceph-mon[75654]: pgmap v46: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:09 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3214176959' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:09 compute-0 ceph-mon[75654]: osdmap e20: 3 total, 3 up, 3 in
Nov 25 23:33:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 25 23:33:09 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 25 23:33:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:09 compute-0 podman[94978]: 2025-11-25 23:33:09.515044713 +0000 UTC m=+0.043225839 container create 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 23:33:09 compute-0 systemd[1]: Started libpod-conmon-6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619.scope.
Nov 25 23:33:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 23:33:09 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4149178136' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3573148e0770a4f4548f09008db23f629ae09809e0b0d6773aa2f716a04190/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3573148e0770a4f4548f09008db23f629ae09809e0b0d6773aa2f716a04190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3573148e0770a4f4548f09008db23f629ae09809e0b0d6773aa2f716a04190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3573148e0770a4f4548f09008db23f629ae09809e0b0d6773aa2f716a04190/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:09 compute-0 podman[94978]: 2025-11-25 23:33:09.498484851 +0000 UTC m=+0.026665987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:09 compute-0 podman[94978]: 2025-11-25 23:33:09.597907376 +0000 UTC m=+0.126088532 container init 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 23:33:09 compute-0 podman[94978]: 2025-11-25 23:33:09.603227965 +0000 UTC m=+0.131409081 container start 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:09 compute-0 podman[94978]: 2025-11-25 23:33:09.606951262 +0000 UTC m=+0.135132388 container attach 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 25 23:33:09 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v49: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:10 compute-0 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 23:33:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:10 compute-0 stoic_bartik[94995]: {
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:     "0": [
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:         {
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "devices": [
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "/dev/loop3"
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             ],
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_name": "ceph_lv0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_size": "21470642176",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "name": "ceph_lv0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "tags": {
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.crush_device_class": "",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.encrypted": "0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.osd_id": "0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.type": "block",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.vdo": "0"
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             },
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "type": "block",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "vg_name": "ceph_vg0"
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:         }
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:     ],
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:     "1": [
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:         {
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "devices": [
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "/dev/loop4"
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             ],
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_name": "ceph_lv1",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_size": "21470642176",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "name": "ceph_lv1",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "tags": {
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.crush_device_class": "",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.encrypted": "0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.osd_id": "1",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.type": "block",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.vdo": "0"
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             },
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "type": "block",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "vg_name": "ceph_vg1"
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:         }
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:     ],
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:     "2": [
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:         {
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "devices": [
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "/dev/loop5"
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             ],
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_name": "ceph_lv2",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_size": "21470642176",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "name": "ceph_lv2",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "tags": {
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.crush_device_class": "",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.encrypted": "0",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.osd_id": "2",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.type": "block",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:                 "ceph.vdo": "0"
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             },
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "type": "block",
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:             "vg_name": "ceph_vg2"
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:         }
Nov 25 23:33:10 compute-0 stoic_bartik[94995]:     ]
Nov 25 23:33:10 compute-0 stoic_bartik[94995]: }
Nov 25 23:33:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 25 23:33:10 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4149178136' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 25 23:33:10 compute-0 wizardly_swanson[94875]: pool 'cephfs.cephfs.meta' created
Nov 25 23:33:10 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 25 23:33:10 compute-0 ceph-mon[75654]: osdmap e21: 3 total, 3 up, 3 in
Nov 25 23:33:10 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4149178136' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:10 compute-0 ceph-mon[75654]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 23:33:10 compute-0 systemd[1]: libpod-6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619.scope: Deactivated successfully.
Nov 25 23:33:10 compute-0 podman[94978]: 2025-11-25 23:33:10.441272417 +0000 UTC m=+0.969453553 container died 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 23:33:10 compute-0 systemd[1]: libpod-c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39.scope: Deactivated successfully.
Nov 25 23:33:10 compute-0 podman[94842]: 2025-11-25 23:33:10.449956923 +0000 UTC m=+1.590812738 container died c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 23:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f3573148e0770a4f4548f09008db23f629ae09809e0b0d6773aa2f716a04190-merged.mount: Deactivated successfully.
Nov 25 23:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9da050e78968385c17d86bd9934a095c93c62dfe03048f6adcbf1408a81c8d9-merged.mount: Deactivated successfully.
Nov 25 23:33:10 compute-0 podman[94978]: 2025-11-25 23:33:10.519257961 +0000 UTC m=+1.047439077 container remove 6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bartik, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:10 compute-0 podman[94842]: 2025-11-25 23:33:10.527362253 +0000 UTC m=+1.668218068 container remove c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39 (image=quay.io/ceph/ceph:v18, name=wizardly_swanson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 23:33:10 compute-0 systemd[1]: libpod-conmon-6fdce1c85ba00771e97d003c6e1f7b7742bb7bb01bb5a6110b734cf1cc0a6619.scope: Deactivated successfully.
Nov 25 23:33:10 compute-0 sudo[94780]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:10 compute-0 systemd[1]: libpod-conmon-c6600f9c53a406c41aed29ce3aa520102f95ac8878c7d90c2d94a8ead67d8c39.scope: Deactivated successfully.
Nov 25 23:33:10 compute-0 sudo[94835]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:10 compute-0 sudo[95030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:10 compute-0 sudo[95030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:10 compute-0 sudo[95030]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:10 compute-0 sudo[95055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:10 compute-0 sudo[95055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:10 compute-0 sudo[95055]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:10 compute-0 sudo[95103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkepkckqrtleigywfigplrzjxqxncqoe ; /usr/bin/python3'
Nov 25 23:33:10 compute-0 sudo[95103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:10 compute-0 sudo[95104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:10 compute-0 sudo[95104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:10 compute-0 sudo[95104]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:10 compute-0 sudo[95131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:33:10 compute-0 sudo[95131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:10 compute-0 python3[95125]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:10 compute-0 podman[95156]: 2025-11-25 23:33:10.97135567 +0000 UTC m=+0.068941520 container create 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 23:33:11 compute-0 systemd[1]: Started libpod-conmon-7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373.scope.
Nov 25 23:33:11 compute-0 podman[95156]: 2025-11-25 23:33:10.947117768 +0000 UTC m=+0.044703718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d34920ed697ec855e458a6680b7134e6eda06ed8dd4d76975ac69081dbfad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d56d34920ed697ec855e458a6680b7134e6eda06ed8dd4d76975ac69081dbfad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:11 compute-0 podman[95156]: 2025-11-25 23:33:11.068822794 +0000 UTC m=+0.166408764 container init 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:33:11 compute-0 podman[95156]: 2025-11-25 23:33:11.078920077 +0000 UTC m=+0.176505927 container start 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 23:33:11 compute-0 podman[95156]: 2025-11-25 23:33:11.082591564 +0000 UTC m=+0.180177454 container attach 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:11 compute-0 podman[95213]: 2025-11-25 23:33:11.262010956 +0000 UTC m=+0.065593793 container create 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 23:33:11 compute-0 systemd[1]: Started libpod-conmon-938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645.scope.
Nov 25 23:33:11 compute-0 podman[95213]: 2025-11-25 23:33:11.233255555 +0000 UTC m=+0.036838452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:11 compute-0 podman[95213]: 2025-11-25 23:33:11.355807934 +0000 UTC m=+0.159390791 container init 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:11 compute-0 podman[95213]: 2025-11-25 23:33:11.363483834 +0000 UTC m=+0.167066681 container start 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 23:33:11 compute-0 podman[95213]: 2025-11-25 23:33:11.367483858 +0000 UTC m=+0.171066705 container attach 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 25 23:33:11 compute-0 inspiring_franklin[95230]: 167 167
Nov 25 23:33:11 compute-0 systemd[1]: libpod-938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645.scope: Deactivated successfully.
Nov 25 23:33:11 compute-0 podman[95213]: 2025-11-25 23:33:11.372114299 +0000 UTC m=+0.175697146 container died 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 23:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e39d43bcf002a7d0ad93d272ff05b226ff31da6f4424d70e2324b3b6ce29ee22-merged.mount: Deactivated successfully.
Nov 25 23:33:11 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:11 compute-0 podman[95213]: 2025-11-25 23:33:11.409211497 +0000 UTC m=+0.212794344 container remove 938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 23:33:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 25 23:33:11 compute-0 ceph-mon[75654]: pgmap v49: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:11 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4149178136' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:11 compute-0 ceph-mon[75654]: osdmap e22: 3 total, 3 up, 3 in
Nov 25 23:33:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 25 23:33:11 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 25 23:33:11 compute-0 systemd[1]: libpod-conmon-938971a3c8aae1a8b6f376c4e954f63b73f8d155cf72699f9724d57f7dd6e645.scope: Deactivated successfully.
Nov 25 23:33:11 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:11 compute-0 podman[95272]: 2025-11-25 23:33:11.600252003 +0000 UTC m=+0.057685526 container create 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:33:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 25 23:33:11 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1077117350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:11 compute-0 systemd[1]: Started libpod-conmon-64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8.scope.
Nov 25 23:33:11 compute-0 podman[95272]: 2025-11-25 23:33:11.569364377 +0000 UTC m=+0.026797950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8495845a22e298db098055c0a12dc52fb2ea1534470dba76046893639ef7588/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8495845a22e298db098055c0a12dc52fb2ea1534470dba76046893639ef7588/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8495845a22e298db098055c0a12dc52fb2ea1534470dba76046893639ef7588/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8495845a22e298db098055c0a12dc52fb2ea1534470dba76046893639ef7588/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:11 compute-0 podman[95272]: 2025-11-25 23:33:11.704384131 +0000 UTC m=+0.161817644 container init 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:11 compute-0 podman[95272]: 2025-11-25 23:33:11.721034316 +0000 UTC m=+0.178467829 container start 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:33:11 compute-0 podman[95272]: 2025-11-25 23:33:11.724985378 +0000 UTC m=+0.182418901 container attach 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:11 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v52: 6 pgs: 1 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 25 23:33:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1077117350' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 25 23:33:12 compute-0 trusting_faraday[95190]: pool 'cephfs.cephfs.data' created
Nov 25 23:33:12 compute-0 ceph-mon[75654]: osdmap e23: 3 total, 3 up, 3 in
Nov 25 23:33:12 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1077117350' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 23:33:12 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 25 23:33:12 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:12 compute-0 systemd[1]: libpod-7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373.scope: Deactivated successfully.
Nov 25 23:33:12 compute-0 podman[95309]: 2025-11-25 23:33:12.545568144 +0000 UTC m=+0.031546694 container died 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d56d34920ed697ec855e458a6680b7134e6eda06ed8dd4d76975ac69081dbfad-merged.mount: Deactivated successfully.
Nov 25 23:33:12 compute-0 podman[95309]: 2025-11-25 23:33:12.597581822 +0000 UTC m=+0.083560382 container remove 7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373 (image=quay.io/ceph/ceph:v18, name=trusting_faraday, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:12 compute-0 systemd[1]: libpod-conmon-7cd4ed8af6a8a0f570aa0759b029306f954b08f54a8a1b5f129e971df8c10373.scope: Deactivated successfully.
Nov 25 23:33:12 compute-0 sudo[95103]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:12 compute-0 epic_banzai[95292]: {
Nov 25 23:33:12 compute-0 epic_banzai[95292]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "osd_id": 2,
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "type": "bluestore"
Nov 25 23:33:12 compute-0 epic_banzai[95292]:     },
Nov 25 23:33:12 compute-0 epic_banzai[95292]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "osd_id": 1,
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "type": "bluestore"
Nov 25 23:33:12 compute-0 epic_banzai[95292]:     },
Nov 25 23:33:12 compute-0 epic_banzai[95292]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "osd_id": 0,
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:12 compute-0 epic_banzai[95292]:         "type": "bluestore"
Nov 25 23:33:12 compute-0 epic_banzai[95292]:     }
Nov 25 23:33:12 compute-0 epic_banzai[95292]: }
Nov 25 23:33:12 compute-0 systemd[1]: libpod-64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8.scope: Deactivated successfully.
Nov 25 23:33:12 compute-0 podman[95272]: 2025-11-25 23:33:12.73889126 +0000 UTC m=+1.196324753 container died 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:12 compute-0 systemd[1]: libpod-64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8.scope: Consumed 1.020s CPU time.
Nov 25 23:33:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8495845a22e298db098055c0a12dc52fb2ea1534470dba76046893639ef7588-merged.mount: Deactivated successfully.
Nov 25 23:33:12 compute-0 podman[95272]: 2025-11-25 23:33:12.8071234 +0000 UTC m=+1.264556893 container remove 64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 23:33:12 compute-0 systemd[1]: libpod-conmon-64e665b5a2c2214618f22090ffe82c9f9032fe0133c2e6ecf22dd57cc55455f8.scope: Deactivated successfully.
Nov 25 23:33:12 compute-0 sudo[95131]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:12 compute-0 sudo[95372]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpkwowuldaubhuxtohtefeyviaorqzuu ; /usr/bin/python3'
Nov 25 23:33:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:12 compute-0 sudo[95372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:12 compute-0 sudo[95375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:12 compute-0 sudo[95375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:12 compute-0 sudo[95375]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:12 compute-0 python3[95374]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:13 compute-0 sudo[95400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:33:13 compute-0 sudo[95400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:13 compute-0 sudo[95400]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 23:33:13 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:13 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 23:33:13 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 23:33:13 compute-0 podman[95421]: 2025-11-25 23:33:13.077160567 +0000 UTC m=+0.062484822 container create da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:13 compute-0 systemd[1]: Started libpod-conmon-da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc.scope.
Nov 25 23:33:13 compute-0 podman[95421]: 2025-11-25 23:33:13.044236439 +0000 UTC m=+0.029560744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6be747229267e684140fdeaa16253ef6eb3ee8e0ed150a1ee3196b0f2a645a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d6be747229267e684140fdeaa16253ef6eb3ee8e0ed150a1ee3196b0f2a645a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:13 compute-0 sudo[95438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:13 compute-0 sudo[95438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:13 compute-0 sudo[95438]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:13 compute-0 podman[95421]: 2025-11-25 23:33:13.181822509 +0000 UTC m=+0.167146794 container init da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:13 compute-0 podman[95421]: 2025-11-25 23:33:13.18954254 +0000 UTC m=+0.174866805 container start da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:13 compute-0 podman[95421]: 2025-11-25 23:33:13.193462533 +0000 UTC m=+0.178786768 container attach da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 23:33:13 compute-0 sudo[95468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:13 compute-0 sudo[95468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:13 compute-0 sudo[95468]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:13 compute-0 sudo[95494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:13 compute-0 sudo[95494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:13 compute-0 sudo[95494]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:13 compute-0 sudo[95519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:33:13 compute-0 sudo[95519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 25 23:33:13 compute-0 ceph-mon[75654]: pgmap v52: 6 pgs: 1 unknown, 1 creating+peering, 4 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:13 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1077117350' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 23:33:13 compute-0 ceph-mon[75654]: osdmap e24: 3 total, 3 up, 3 in
Nov 25 23:33:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 23:33:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 25 23:33:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:13 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:13 compute-0 podman[95579]: 2025-11-25 23:33:13.658045517 +0000 UTC m=+0.048295011 container create ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 23:33:13 compute-0 systemd[1]: Started libpod-conmon-ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd.scope.
Nov 25 23:33:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:13 compute-0 podman[95579]: 2025-11-25 23:33:13.630823897 +0000 UTC m=+0.021073441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:13 compute-0 podman[95579]: 2025-11-25 23:33:13.741082594 +0000 UTC m=+0.131332088 container init ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 23:33:13 compute-0 podman[95579]: 2025-11-25 23:33:13.748469328 +0000 UTC m=+0.138718832 container start ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 23:33:13 compute-0 podman[95579]: 2025-11-25 23:33:13.75242204 +0000 UTC m=+0.142671534 container attach ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:13 compute-0 competent_diffie[95595]: 167 167
Nov 25 23:33:13 compute-0 systemd[1]: libpod-ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd.scope: Deactivated successfully.
Nov 25 23:33:13 compute-0 conmon[95595]: conmon ebd55332a902302297ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd.scope/container/memory.events
Nov 25 23:33:13 compute-0 podman[95579]: 2025-11-25 23:33:13.755130451 +0000 UTC m=+0.145379925 container died ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3830652392' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 23:33:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd18963a384e11a008cc8fa1ecf0180f97aa6779f3b9b9abc7a8666fc4c178ef-merged.mount: Deactivated successfully.
Nov 25 23:33:13 compute-0 podman[95579]: 2025-11-25 23:33:13.802015645 +0000 UTC m=+0.192265139 container remove ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:13 compute-0 systemd[1]: libpod-conmon-ebd55332a902302297ee8c7fd75bfb70d5181a2492ad26650ac0bb89b50936cd.scope: Deactivated successfully.
Nov 25 23:33:13 compute-0 sudo[95519]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:13 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.gwqfsl (unknown last config time)...
Nov 25 23:33:13 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.gwqfsl (unknown last config time)...
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.gwqfsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gwqfsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 23:33:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:13 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.gwqfsl on compute-0
Nov 25 23:33:13 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.gwqfsl on compute-0
Nov 25 23:33:13 compute-0 sudo[95615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:13 compute-0 sudo[95615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:13 compute-0 sudo[95615]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:13 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:14 compute-0 sudo[95640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:14 compute-0 sudo[95640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:14 compute-0 sudo[95640]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:14 compute-0 sudo[95665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:14 compute-0 sudo[95665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:14 compute-0 sudo[95665]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:14 compute-0 sudo[95690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:33:14 compute-0 sudo[95690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:14 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 25 23:33:14 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3830652392' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 23:33:14 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 25 23:33:14 compute-0 flamboyant_jang[95460]: enabled application 'rbd' on pool 'vms'
Nov 25 23:33:14 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 25 23:33:14 compute-0 ceph-mon[75654]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 25 23:33:14 compute-0 ceph-mon[75654]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 23:33:14 compute-0 ceph-mon[75654]: osdmap e25: 3 total, 3 up, 3 in
Nov 25 23:33:14 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3830652392' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 23:33:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:14 compute-0 ceph-mon[75654]: Reconfiguring mgr.compute-0.gwqfsl (unknown last config time)...
Nov 25 23:33:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gwqfsl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 23:33:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 23:33:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:14 compute-0 ceph-mon[75654]: Reconfiguring daemon mgr.compute-0.gwqfsl on compute-0
Nov 25 23:33:14 compute-0 ceph-mon[75654]: pgmap v55: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:14 compute-0 podman[95731]: 2025-11-25 23:33:14.511517202 +0000 UTC m=+0.075661807 container create 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:14 compute-0 systemd[1]: libpod-da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc.scope: Deactivated successfully.
Nov 25 23:33:14 compute-0 podman[95421]: 2025-11-25 23:33:14.52526313 +0000 UTC m=+1.510587355 container died da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:14 compute-0 systemd[1]: Started libpod-conmon-3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5.scope.
Nov 25 23:33:14 compute-0 podman[95731]: 2025-11-25 23:33:14.48158043 +0000 UTC m=+0.045725095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:14 compute-0 podman[95421]: 2025-11-25 23:33:14.573816597 +0000 UTC m=+1.559140822 container remove da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc (image=quay.io/ceph/ceph:v18, name=flamboyant_jang, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 23:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d6be747229267e684140fdeaa16253ef6eb3ee8e0ed150a1ee3196b0f2a645a-merged.mount: Deactivated successfully.
Nov 25 23:33:14 compute-0 systemd[1]: libpod-conmon-da2f97b085a666897241ae531ce7ca2233c14c272ddfd1e2085337942b331cdc.scope: Deactivated successfully.
Nov 25 23:33:14 compute-0 sudo[95372]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:14 compute-0 podman[95731]: 2025-11-25 23:33:14.621481681 +0000 UTC m=+0.185626336 container init 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 23:33:14 compute-0 podman[95731]: 2025-11-25 23:33:14.631567624 +0000 UTC m=+0.195712239 container start 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 23:33:14 compute-0 podman[95731]: 2025-11-25 23:33:14.635153588 +0000 UTC m=+0.199298203 container attach 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:14 compute-0 thirsty_wilbur[95760]: 167 167
Nov 25 23:33:14 compute-0 systemd[1]: libpod-3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5.scope: Deactivated successfully.
Nov 25 23:33:14 compute-0 podman[95731]: 2025-11-25 23:33:14.637627703 +0000 UTC m=+0.201772348 container died 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-31b8adea9c525b0760fd26a1f31c84a6aed6f296a2544568ed706b8e3f308b2c-merged.mount: Deactivated successfully.
Nov 25 23:33:14 compute-0 podman[95731]: 2025-11-25 23:33:14.686326333 +0000 UTC m=+0.250470918 container remove 3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:33:14 compute-0 systemd[1]: libpod-conmon-3aff5c0c585217ef880931b9e4d2f3a4403dbf8d55b81216bd235aafc5091ee5.scope: Deactivated successfully.
Nov 25 23:33:14 compute-0 sudo[95690]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:14 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:14 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:14 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:14 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:14 compute-0 sudo[95802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrxkbjiwnxpmztuwdchreiqwxlowndmk ; /usr/bin/python3'
Nov 25 23:33:14 compute-0 sudo[95802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:14 compute-0 sudo[95803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:14 compute-0 sudo[95803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:14 compute-0 sudo[95803]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:14 compute-0 sudo[95830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:14 compute-0 sudo[95830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:14 compute-0 sudo[95830]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:14 compute-0 python3[95816]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:14 compute-0 sudo[95855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:14 compute-0 sudo[95855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:14 compute-0 sudo[95855]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:15 compute-0 podman[95878]: 2025-11-25 23:33:15.024681864 +0000 UTC m=+0.048264870 container create 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:15 compute-0 systemd[1]: Started libpod-conmon-2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93.scope.
Nov 25 23:33:15 compute-0 sudo[95886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 23:33:15 compute-0 sudo[95886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b960c4b878cd61dad40846722b8b4e6734b164926da79649f03c875f9081bb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b960c4b878cd61dad40846722b8b4e6734b164926da79649f03c875f9081bb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:15 compute-0 podman[95878]: 2025-11-25 23:33:15.007578308 +0000 UTC m=+0.031161334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:15 compute-0 podman[95878]: 2025-11-25 23:33:15.120234047 +0000 UTC m=+0.143817153 container init 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:15 compute-0 podman[95878]: 2025-11-25 23:33:15.158145136 +0000 UTC m=+0.181728142 container start 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 23:33:15 compute-0 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 23:33:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:15 compute-0 podman[95878]: 2025-11-25 23:33:15.206214412 +0000 UTC m=+0.229797418 container attach 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 23:33:15 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3830652392' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 23:33:15 compute-0 ceph-mon[75654]: osdmap e26: 3 total, 3 up, 3 in
Nov 25 23:33:15 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:15 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:15 compute-0 ceph-mon[75654]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 23:33:15 compute-0 podman[96009]: 2025-11-25 23:33:15.539701315 +0000 UTC m=+0.064481144 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 25 23:33:15 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213799370' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 23:33:15 compute-0 podman[96030]: 2025-11-25 23:33:15.713256244 +0000 UTC m=+0.050405846 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:15 compute-0 podman[96009]: 2025-11-25 23:33:15.725594986 +0000 UTC m=+0.250374785 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:33:15 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:16 compute-0 sudo[95886]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:16 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:16 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:16 compute-0 sudo[96134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:16 compute-0 sudo[96134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:16 compute-0 sudo[96134]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:16 compute-0 sudo[96159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:16 compute-0 sudo[96159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:16 compute-0 sudo[96159]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 25 23:33:16 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3213799370' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 23:33:16 compute-0 ceph-mon[75654]: pgmap v57: 7 pgs: 1 unknown, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:16 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:16 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:16 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213799370' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 23:33:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 25 23:33:16 compute-0 sudo[96184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:16 compute-0 magical_morse[95919]: enabled application 'rbd' on pool 'volumes'
Nov 25 23:33:16 compute-0 sudo[96184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:16 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 25 23:33:16 compute-0 sudo[96184]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:16 compute-0 systemd[1]: libpod-2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93.scope: Deactivated successfully.
Nov 25 23:33:16 compute-0 podman[95878]: 2025-11-25 23:33:16.554390987 +0000 UTC m=+1.577974023 container died 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b960c4b878cd61dad40846722b8b4e6734b164926da79649f03c875f9081bb2-merged.mount: Deactivated successfully.
Nov 25 23:33:16 compute-0 podman[95878]: 2025-11-25 23:33:16.612692418 +0000 UTC m=+1.636275454 container remove 2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93 (image=quay.io/ceph/ceph:v18, name=magical_morse, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 23:33:16 compute-0 systemd[1]: libpod-conmon-2d5e0ce45e081e278ca404bfaa4f1c49e66d769249090faef8e15807ca605c93.scope: Deactivated successfully.
Nov 25 23:33:16 compute-0 sudo[96210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:33:16 compute-0 sudo[96210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:16 compute-0 sudo[95802]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:16 compute-0 sudo[96279]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgkftbolrclcucuwnaikyhkflajqpemt ; /usr/bin/python3'
Nov 25 23:33:16 compute-0 sudo[96279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:16 compute-0 python3[96285]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:17 compute-0 podman[96291]: 2025-11-25 23:33:17.045454702 +0000 UTC m=+0.043400964 container create 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:17 compute-0 systemd[1]: Started libpod-conmon-5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a.scope.
Nov 25 23:33:17 compute-0 podman[96291]: 2025-11-25 23:33:17.026673722 +0000 UTC m=+0.024619964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08664d14b6666a80ea3693e346ecea1108531eba1e998d326b7b554dbb29dfd0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08664d14b6666a80ea3693e346ecea1108531eba1e998d326b7b554dbb29dfd0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:17 compute-0 podman[96291]: 2025-11-25 23:33:17.147745031 +0000 UTC m=+0.145691333 container init 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:33:17 compute-0 podman[96291]: 2025-11-25 23:33:17.156206292 +0000 UTC m=+0.154152564 container start 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:17 compute-0 podman[96291]: 2025-11-25 23:33:17.160886815 +0000 UTC m=+0.158833057 container attach 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:17 compute-0 sudo[96210]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:33:17 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:33:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:33:17 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:17 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 23bf84ef-361a-4ee5-bdda-3f9ad241eab9 does not exist
Nov 25 23:33:17 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev f814aae9-65f2-488e-ad35-b3ab939564d6 does not exist
Nov 25 23:33:17 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev c0e7b792-149e-453c-af2a-a48181c78716 does not exist
Nov 25 23:33:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:33:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:33:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:33:17 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:33:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:17 compute-0 sudo[96324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:17 compute-0 sudo[96324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:17 compute-0 sudo[96324]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:17 compute-0 sudo[96349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:17 compute-0 sudo[96349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:17 compute-0 sudo[96349]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:17 compute-0 sudo[96374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:17 compute-0 sudo[96374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:17 compute-0 sudo[96374]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:17 compute-0 sudo[96409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:33:17 compute-0 sudo[96409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3213799370' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 23:33:17 compute-0 ceph-mon[75654]: osdmap e27: 3 total, 3 up, 3 in
Nov 25 23:33:17 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:17 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:33:17 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:17 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:33:17 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:33:17 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 25 23:33:17 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1809223002' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 23:33:17 compute-0 podman[96484]: 2025-11-25 23:33:17.894604014 +0000 UTC m=+0.062549344 container create 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:33:17 compute-0 systemd[1]: Started libpod-conmon-1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a.scope.
Nov 25 23:33:17 compute-0 sshd-session[96246]: Invalid user jack from 68.183.82.237 port 33166
Nov 25 23:33:17 compute-0 podman[96484]: 2025-11-25 23:33:17.868587744 +0000 UTC m=+0.036533124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:17 compute-0 podman[96484]: 2025-11-25 23:33:17.993828363 +0000 UTC m=+0.161773743 container init 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 23:33:17 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:18 compute-0 podman[96484]: 2025-11-25 23:33:18.004750188 +0000 UTC m=+0.172695508 container start 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 23:33:18 compute-0 charming_wing[96500]: 167 167
Nov 25 23:33:18 compute-0 systemd[1]: libpod-1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a.scope: Deactivated successfully.
Nov 25 23:33:18 compute-0 podman[96484]: 2025-11-25 23:33:18.0102103 +0000 UTC m=+0.178155630 container attach 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 23:33:18 compute-0 podman[96484]: 2025-11-25 23:33:18.011493614 +0000 UTC m=+0.179439004 container died 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f0cf45ef9b48cce7bfb03c8362aac1b493ff4de6403b3ae60f8652e186c83ae-merged.mount: Deactivated successfully.
Nov 25 23:33:18 compute-0 podman[96484]: 2025-11-25 23:33:18.058004667 +0000 UTC m=+0.225949997 container remove 1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wing, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:33:18 compute-0 systemd[1]: libpod-conmon-1abb922694b7230a43f7c7bd87707bb34cc6eb1efa5e3572a504d2af028f1e4a.scope: Deactivated successfully.
Nov 25 23:33:18 compute-0 sshd-session[96246]: Received disconnect from 68.183.82.237 port 33166:11: Bye Bye [preauth]
Nov 25 23:33:18 compute-0 sshd-session[96246]: Disconnected from invalid user jack 68.183.82.237 port 33166 [preauth]
Nov 25 23:33:18 compute-0 podman[96522]: 2025-11-25 23:33:18.273582894 +0000 UTC m=+0.049458532 container create df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:33:18 compute-0 systemd[1]: Started libpod-conmon-df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c.scope.
Nov 25 23:33:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:18 compute-0 podman[96522]: 2025-11-25 23:33:18.257403982 +0000 UTC m=+0.033279650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:18 compute-0 podman[96522]: 2025-11-25 23:33:18.368250775 +0000 UTC m=+0.144126453 container init df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:18 compute-0 podman[96522]: 2025-11-25 23:33:18.379347184 +0000 UTC m=+0.155222852 container start df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 23:33:18 compute-0 podman[96522]: 2025-11-25 23:33:18.383509393 +0000 UTC m=+0.159385071 container attach df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 25 23:33:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1809223002' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 23:33:18 compute-0 ceph-mon[75654]: pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:18 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1809223002' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 23:33:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 25 23:33:18 compute-0 reverent_morse[96308]: enabled application 'rbd' on pool 'backups'
Nov 25 23:33:18 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 25 23:33:18 compute-0 systemd[1]: libpod-5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a.scope: Deactivated successfully.
Nov 25 23:33:18 compute-0 podman[96291]: 2025-11-25 23:33:18.579801396 +0000 UTC m=+1.577747628 container died 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 23:33:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-08664d14b6666a80ea3693e346ecea1108531eba1e998d326b7b554dbb29dfd0-merged.mount: Deactivated successfully.
Nov 25 23:33:18 compute-0 podman[96291]: 2025-11-25 23:33:18.625399716 +0000 UTC m=+1.623345948 container remove 5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a (image=quay.io/ceph/ceph:v18, name=reverent_morse, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 23:33:18 compute-0 systemd[1]: libpod-conmon-5159c8ca5365269bedde7acc05d7304af506cbab5abbda369dfaa6b1e0a0fc7a.scope: Deactivated successfully.
Nov 25 23:33:18 compute-0 sudo[96279]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:18 compute-0 sudo[96578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcuatqdmdkxstozpsxnvrmvtfyeroxeh ; /usr/bin/python3'
Nov 25 23:33:18 compute-0 sudo[96578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:18 compute-0 python3[96580]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:19 compute-0 podman[96581]: 2025-11-25 23:33:19.07615729 +0000 UTC m=+0.060123760 container create b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:19 compute-0 systemd[1]: Started libpod-conmon-b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2.scope.
Nov 25 23:33:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:19 compute-0 podman[96581]: 2025-11-25 23:33:19.049413702 +0000 UTC m=+0.033380162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651d1514fbbf65ff3d9a38c9e592d82cf338b07fe78f454c96746c850a4d4927/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/651d1514fbbf65ff3d9a38c9e592d82cf338b07fe78f454c96746c850a4d4927/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:19 compute-0 podman[96581]: 2025-11-25 23:33:19.164330621 +0000 UTC m=+0.148297151 container init b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:33:19 compute-0 podman[96581]: 2025-11-25 23:33:19.175220685 +0000 UTC m=+0.159187125 container start b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:19 compute-0 podman[96581]: 2025-11-25 23:33:19.178311296 +0000 UTC m=+0.162277766 container attach b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:19 compute-0 cranky_kalam[96538]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:33:19 compute-0 cranky_kalam[96538]: --> relative data size: 1.0
Nov 25 23:33:19 compute-0 cranky_kalam[96538]: --> All data devices are unavailable
Nov 25 23:33:19 compute-0 systemd[1]: libpod-df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c.scope: Deactivated successfully.
Nov 25 23:33:19 compute-0 podman[96522]: 2025-11-25 23:33:19.504399997 +0000 UTC m=+1.280275725 container died df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:19 compute-0 systemd[1]: libpod-df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c.scope: Consumed 1.039s CPU time.
Nov 25 23:33:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fafcd303bc7bac1124d7aeb25dc1779f1613daa61d01c0c535cad8dc5e1f6fe-merged.mount: Deactivated successfully.
Nov 25 23:33:19 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1809223002' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 23:33:19 compute-0 ceph-mon[75654]: osdmap e28: 3 total, 3 up, 3 in
Nov 25 23:33:19 compute-0 podman[96522]: 2025-11-25 23:33:19.585081352 +0000 UTC m=+1.360957030 container remove df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kalam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:33:19 compute-0 systemd[1]: libpod-conmon-df76fe7522bb61cf1527c116ab463d2113cc06eb4cd8e42ce6b9e39741f8f17c.scope: Deactivated successfully.
Nov 25 23:33:19 compute-0 sudo[96409]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:19 compute-0 sudo[96654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:19 compute-0 sudo[96654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:19 compute-0 sudo[96654]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 25 23:33:19 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3856405420' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 23:33:19 compute-0 sudo[96679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:19 compute-0 sudo[96679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:19 compute-0 sudo[96679]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:19 compute-0 sudo[96705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:19 compute-0 sudo[96705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:19 compute-0 sudo[96705]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:19 compute-0 sudo[96730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:33:19 compute-0 sudo[96730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:19 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:20 compute-0 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 23:33:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:20 compute-0 podman[96796]: 2025-11-25 23:33:20.294756563 +0000 UTC m=+0.068441587 container create f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:20 compute-0 systemd[1]: Started libpod-conmon-f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558.scope.
Nov 25 23:33:20 compute-0 podman[96796]: 2025-11-25 23:33:20.26860923 +0000 UTC m=+0.042294304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:20 compute-0 podman[96796]: 2025-11-25 23:33:20.395127163 +0000 UTC m=+0.168812177 container init f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:20 compute-0 podman[96796]: 2025-11-25 23:33:20.400646327 +0000 UTC m=+0.174331321 container start f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 23:33:20 compute-0 podman[96796]: 2025-11-25 23:33:20.403311976 +0000 UTC m=+0.176996980 container attach f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 25 23:33:20 compute-0 exciting_pike[96813]: 167 167
Nov 25 23:33:20 compute-0 systemd[1]: libpod-f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558.scope: Deactivated successfully.
Nov 25 23:33:20 compute-0 podman[96796]: 2025-11-25 23:33:20.407005772 +0000 UTC m=+0.180690756 container died f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff435bbc29d11eef0a91d42f5b04e5a7729e24133b8da56bdd2236dd18a975b8-merged.mount: Deactivated successfully.
Nov 25 23:33:20 compute-0 podman[96796]: 2025-11-25 23:33:20.444113701 +0000 UTC m=+0.217798685 container remove f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_pike, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 23:33:20 compute-0 systemd[1]: libpod-conmon-f4d415185a3647d8bd1a5101286924b48afa4e9dcf5c908b7f9efa42d365c558.scope: Deactivated successfully.
Nov 25 23:33:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 25 23:33:20 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3856405420' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 23:33:20 compute-0 ceph-mon[75654]: pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:20 compute-0 ceph-mon[75654]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 23:33:20 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3856405420' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 23:33:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 25 23:33:20 compute-0 eager_hopper[96604]: enabled application 'rbd' on pool 'images'
Nov 25 23:33:20 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 25 23:33:20 compute-0 systemd[1]: libpod-b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2.scope: Deactivated successfully.
Nov 25 23:33:20 compute-0 podman[96581]: 2025-11-25 23:33:20.594249529 +0000 UTC m=+1.578216009 container died b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 23:33:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-651d1514fbbf65ff3d9a38c9e592d82cf338b07fe78f454c96746c850a4d4927-merged.mount: Deactivated successfully.
Nov 25 23:33:20 compute-0 podman[96581]: 2025-11-25 23:33:20.644652865 +0000 UTC m=+1.628619335 container remove b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2 (image=quay.io/ceph/ceph:v18, name=eager_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:20 compute-0 systemd[1]: libpod-conmon-b9e45985b81314737baffb3a83a93e6d38424305203db8cfc0a9a01d3e8598f2.scope: Deactivated successfully.
Nov 25 23:33:20 compute-0 podman[96837]: 2025-11-25 23:33:20.663753103 +0000 UTC m=+0.065034278 container create 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:33:20 compute-0 sudo[96578]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:20 compute-0 systemd[1]: Started libpod-conmon-86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9.scope.
Nov 25 23:33:20 compute-0 podman[96837]: 2025-11-25 23:33:20.63989481 +0000 UTC m=+0.041176055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c952a3e99946f977fdee3c1fc10355e3fad81915a19039e28d469ff27044da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c952a3e99946f977fdee3c1fc10355e3fad81915a19039e28d469ff27044da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c952a3e99946f977fdee3c1fc10355e3fad81915a19039e28d469ff27044da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0c952a3e99946f977fdee3c1fc10355e3fad81915a19039e28d469ff27044da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:20 compute-0 podman[96837]: 2025-11-25 23:33:20.779918395 +0000 UTC m=+0.181199560 container init 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:33:20 compute-0 podman[96837]: 2025-11-25 23:33:20.797604636 +0000 UTC m=+0.198885801 container start 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 23:33:20 compute-0 podman[96837]: 2025-11-25 23:33:20.800663246 +0000 UTC m=+0.201944401 container attach 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:20 compute-0 sudo[96893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryjjuswrttqzrbaogrdouxsqhtlwnaeu ; /usr/bin/python3'
Nov 25 23:33:20 compute-0 sudo[96893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:21 compute-0 python3[96895]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:21 compute-0 podman[96896]: 2025-11-25 23:33:21.085113039 +0000 UTC m=+0.051765391 container create 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 25 23:33:21 compute-0 systemd[1]: Started libpod-conmon-07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063.scope.
Nov 25 23:33:21 compute-0 podman[96896]: 2025-11-25 23:33:21.068737032 +0000 UTC m=+0.035389404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740a3ac6852b92104c2bd77aa8d6a6dc09f18e862347efccb3c5543ccd4cf719/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740a3ac6852b92104c2bd77aa8d6a6dc09f18e862347efccb3c5543ccd4cf719/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:21 compute-0 podman[96896]: 2025-11-25 23:33:21.192918413 +0000 UTC m=+0.159570785 container init 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 23:33:21 compute-0 podman[96896]: 2025-11-25 23:33:21.201966659 +0000 UTC m=+0.168619011 container start 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 23:33:21 compute-0 podman[96896]: 2025-11-25 23:33:21.204923166 +0000 UTC m=+0.171575528 container attach 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:21 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3856405420' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 23:33:21 compute-0 ceph-mon[75654]: osdmap e29: 3 total, 3 up, 3 in
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]: {
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:     "0": [
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:         {
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "devices": [
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "/dev/loop3"
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             ],
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_name": "ceph_lv0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_size": "21470642176",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "name": "ceph_lv0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "tags": {
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.crush_device_class": "",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.encrypted": "0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.osd_id": "0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.type": "block",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.vdo": "0"
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             },
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "type": "block",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "vg_name": "ceph_vg0"
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:         }
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:     ],
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:     "1": [
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:         {
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "devices": [
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "/dev/loop4"
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             ],
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_name": "ceph_lv1",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_size": "21470642176",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "name": "ceph_lv1",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "tags": {
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.crush_device_class": "",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.encrypted": "0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.osd_id": "1",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.type": "block",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.vdo": "0"
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             },
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "type": "block",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "vg_name": "ceph_vg1"
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:         }
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:     ],
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:     "2": [
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:         {
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "devices": [
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "/dev/loop5"
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             ],
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_name": "ceph_lv2",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_size": "21470642176",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "name": "ceph_lv2",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "tags": {
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.crush_device_class": "",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.encrypted": "0",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.osd_id": "2",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.type": "block",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:                 "ceph.vdo": "0"
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             },
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "type": "block",
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:             "vg_name": "ceph_vg2"
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:         }
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]:     ]
Nov 25 23:33:21 compute-0 nostalgic_nightingale[96865]: }
Nov 25 23:33:21 compute-0 systemd[1]: libpod-86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9.scope: Deactivated successfully.
Nov 25 23:33:21 compute-0 podman[96837]: 2025-11-25 23:33:21.698461677 +0000 UTC m=+1.099742882 container died 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:33:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0c952a3e99946f977fdee3c1fc10355e3fad81915a19039e28d469ff27044da-merged.mount: Deactivated successfully.
Nov 25 23:33:21 compute-0 podman[96837]: 2025-11-25 23:33:21.768901005 +0000 UTC m=+1.170182200 container remove 86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 23:33:21 compute-0 systemd[1]: libpod-conmon-86c8d6ffee7d175b320ce9bdc28bde44940a8d1d41e78c55c6d36e2c7fe7cff9.scope: Deactivated successfully.
Nov 25 23:33:21 compute-0 sudo[96730]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:21 compute-0 sudo[96949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:21 compute-0 sudo[96949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:21 compute-0 sudo[96949]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:21 compute-0 sudo[96974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:21 compute-0 sudo[96974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:21 compute-0 sudo[96974]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:21 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 25 23:33:22 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/403499466' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 23:33:22 compute-0 sudo[96999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:22 compute-0 sudo[96999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:22 compute-0 sudo[96999]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:22 compute-0 sudo[97025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:33:22 compute-0 sudo[97025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:22 compute-0 podman[97090]: 2025-11-25 23:33:22.582039247 +0000 UTC m=+0.067672308 container create e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 23:33:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 25 23:33:22 compute-0 ceph-mon[75654]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:22 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/403499466' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 23:33:22 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/403499466' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 23:33:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 25 23:33:22 compute-0 gracious_antonelli[96911]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 25 23:33:22 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 25 23:33:22 compute-0 systemd[1]: Started libpod-conmon-e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75.scope.
Nov 25 23:33:22 compute-0 systemd[1]: libpod-07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063.scope: Deactivated successfully.
Nov 25 23:33:22 compute-0 podman[96896]: 2025-11-25 23:33:22.636693873 +0000 UTC m=+1.603346255 container died 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:22 compute-0 podman[97090]: 2025-11-25 23:33:22.555362401 +0000 UTC m=+0.040995522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-740a3ac6852b92104c2bd77aa8d6a6dc09f18e862347efccb3c5543ccd4cf719-merged.mount: Deactivated successfully.
Nov 25 23:33:22 compute-0 podman[97090]: 2025-11-25 23:33:22.680121586 +0000 UTC m=+0.165754657 container init e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 23:33:22 compute-0 podman[96896]: 2025-11-25 23:33:22.687836437 +0000 UTC m=+1.654488809 container remove 07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063 (image=quay.io/ceph/ceph:v18, name=gracious_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 23:33:22 compute-0 podman[97090]: 2025-11-25 23:33:22.693232448 +0000 UTC m=+0.178865489 container start e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 23:33:22 compute-0 funny_galileo[97108]: 167 167
Nov 25 23:33:22 compute-0 podman[97090]: 2025-11-25 23:33:22.69824711 +0000 UTC m=+0.183880181 container attach e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:22 compute-0 systemd[1]: libpod-e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75.scope: Deactivated successfully.
Nov 25 23:33:22 compute-0 podman[97090]: 2025-11-25 23:33:22.700310163 +0000 UTC m=+0.185943224 container died e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 23:33:22 compute-0 systemd[1]: libpod-conmon-07519b36b22fa584600f6e6b7af65e00349318f0dbd71cde496dd351b85b2063.scope: Deactivated successfully.
Nov 25 23:33:22 compute-0 sudo[96893]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e5ea64f790e378b5c844542ec1861dfee15f9b4cf5b653be3025f0181fe6254-merged.mount: Deactivated successfully.
Nov 25 23:33:22 compute-0 podman[97090]: 2025-11-25 23:33:22.73967079 +0000 UTC m=+0.225303831 container remove e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:22 compute-0 systemd[1]: libpod-conmon-e19ce40d54833d9b70a206fe38d29e41438d83f1d21965086d518c5a26808f75.scope: Deactivated successfully.
Nov 25 23:33:22 compute-0 sudo[97168]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyzlpptbxkpkgbdvlnfjjscuuhoedsvz ; /usr/bin/python3'
Nov 25 23:33:22 compute-0 sudo[97168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:22 compute-0 podman[97164]: 2025-11-25 23:33:22.93433055 +0000 UTC m=+0.068928609 container create 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:22 compute-0 systemd[1]: Started libpod-conmon-2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645.scope.
Nov 25 23:33:23 compute-0 podman[97164]: 2025-11-25 23:33:22.907306555 +0000 UTC m=+0.041904674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979c8244437182f93748b4f6f26de37e6814ec09c9ce40957d32d21edf4702af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979c8244437182f93748b4f6f26de37e6814ec09c9ce40957d32d21edf4702af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979c8244437182f93748b4f6f26de37e6814ec09c9ce40957d32d21edf4702af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/979c8244437182f93748b4f6f26de37e6814ec09c9ce40957d32d21edf4702af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:23 compute-0 podman[97164]: 2025-11-25 23:33:23.034611018 +0000 UTC m=+0.169209127 container init 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:23 compute-0 python3[97177]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:23 compute-0 podman[97164]: 2025-11-25 23:33:23.051583711 +0000 UTC m=+0.186181780 container start 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:23 compute-0 podman[97164]: 2025-11-25 23:33:23.055673927 +0000 UTC m=+0.190271986 container attach 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 23:33:23 compute-0 podman[97190]: 2025-11-25 23:33:23.120035667 +0000 UTC m=+0.057392959 container create d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:23 compute-0 systemd[1]: Started libpod-conmon-d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5.scope.
Nov 25 23:33:23 compute-0 podman[97190]: 2025-11-25 23:33:23.091523562 +0000 UTC m=+0.028880904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14531388e0cc002b2340a6c35766c6439123a636233799a961217a7c62d6c323/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14531388e0cc002b2340a6c35766c6439123a636233799a961217a7c62d6c323/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:23 compute-0 podman[97190]: 2025-11-25 23:33:23.222013909 +0000 UTC m=+0.159371191 container init d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 23:33:23 compute-0 podman[97190]: 2025-11-25 23:33:23.227185783 +0000 UTC m=+0.164543045 container start d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 23:33:23 compute-0 podman[97190]: 2025-11-25 23:33:23.230724096 +0000 UTC m=+0.168081438 container attach d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 23:33:23 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/403499466' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 23:33:23 compute-0 ceph-mon[75654]: osdmap e30: 3 total, 3 up, 3 in
Nov 25 23:33:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 25 23:33:23 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2286248551' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 23:33:23 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:24 compute-0 cool_joliot[97186]: {
Nov 25 23:33:24 compute-0 cool_joliot[97186]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "osd_id": 2,
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "type": "bluestore"
Nov 25 23:33:24 compute-0 cool_joliot[97186]:     },
Nov 25 23:33:24 compute-0 cool_joliot[97186]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "osd_id": 1,
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "type": "bluestore"
Nov 25 23:33:24 compute-0 cool_joliot[97186]:     },
Nov 25 23:33:24 compute-0 cool_joliot[97186]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "osd_id": 0,
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:24 compute-0 cool_joliot[97186]:         "type": "bluestore"
Nov 25 23:33:24 compute-0 cool_joliot[97186]:     }
Nov 25 23:33:24 compute-0 cool_joliot[97186]: }
Nov 25 23:33:24 compute-0 systemd[1]: libpod-2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645.scope: Deactivated successfully.
Nov 25 23:33:24 compute-0 systemd[1]: libpod-2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645.scope: Consumed 1.150s CPU time.
Nov 25 23:33:24 compute-0 podman[97259]: 2025-11-25 23:33:24.254467403 +0000 UTC m=+0.033045773 container died 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-979c8244437182f93748b4f6f26de37e6814ec09c9ce40957d32d21edf4702af-merged.mount: Deactivated successfully.
Nov 25 23:33:24 compute-0 podman[97259]: 2025-11-25 23:33:24.334840761 +0000 UTC m=+0.113419101 container remove 2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:24 compute-0 systemd[1]: libpod-conmon-2d1b1f71675be997f89a399a53f7668a97d89b0deb271d1bb44443b1fb5d9645.scope: Deactivated successfully.
Nov 25 23:33:24 compute-0 sudo[97025]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:24 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:24 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:24 compute-0 sudo[97274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:24 compute-0 sudo[97274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:24 compute-0 sudo[97274]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:24 compute-0 sudo[97299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:33:24 compute-0 sudo[97299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:24 compute-0 sudo[97299]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 25 23:33:24 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2286248551' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 23:33:24 compute-0 ceph-mon[75654]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:24 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2286248551' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 23:33:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 25 23:33:24 compute-0 funny_zhukovsky[97207]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 25 23:33:24 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 25 23:33:24 compute-0 systemd[1]: libpod-d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5.scope: Deactivated successfully.
Nov 25 23:33:24 compute-0 podman[97190]: 2025-11-25 23:33:24.660769007 +0000 UTC m=+1.598126299 container died d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 23:33:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-14531388e0cc002b2340a6c35766c6439123a636233799a961217a7c62d6c323-merged.mount: Deactivated successfully.
Nov 25 23:33:24 compute-0 podman[97190]: 2025-11-25 23:33:24.712582989 +0000 UTC m=+1.649940281 container remove d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5 (image=quay.io/ceph/ceph:v18, name=funny_zhukovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 23:33:24 compute-0 systemd[1]: libpod-conmon-d15164a80bd4ed85b7e411887aaf46a34973416db0dd6865cca17c1b29728ab5.scope: Deactivated successfully.
Nov 25 23:33:24 compute-0 sudo[97168]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:25 compute-0 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 23:33:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:25 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2286248551' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 23:33:25 compute-0 ceph-mon[75654]: osdmap e31: 3 total, 3 up, 3 in
Nov 25 23:33:25 compute-0 ceph-mon[75654]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 23:33:25 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:33:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:33:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:33:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:33:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:33:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:33:26 compute-0 python3[97414]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:33:26 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 23:33:26 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 23:33:26 compute-0 ceph-mon[75654]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:26 compute-0 python3[97485]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113606.1408372-36784-76276289501161/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:33:27 compute-0 sudo[97533]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrligoqtxfwszafmkibngsapanlonlxw ; /usr/bin/python3'
Nov 25 23:33:27 compute-0 sudo[97533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:27 compute-0 python3[97535]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:27 compute-0 podman[97536]: 2025-11-25 23:33:27.504090662 +0000 UTC m=+0.048584219 container create 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:33:27 compute-0 systemd[1]: Started libpod-conmon-6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1.scope.
Nov 25 23:33:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bba69982c9e489d6b7336acb9b659f5e40e1cf4ba17b46578d045de6f960062/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bba69982c9e489d6b7336acb9b659f5e40e1cf4ba17b46578d045de6f960062/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bba69982c9e489d6b7336acb9b659f5e40e1cf4ba17b46578d045de6f960062/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:27 compute-0 podman[97536]: 2025-11-25 23:33:27.481029981 +0000 UTC m=+0.025523598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:27 compute-0 podman[97536]: 2025-11-25 23:33:27.587963371 +0000 UTC m=+0.132456978 container init 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:27 compute-0 podman[97536]: 2025-11-25 23:33:27.601827743 +0000 UTC m=+0.146321300 container start 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:33:27 compute-0 podman[97536]: 2025-11-25 23:33:27.606492664 +0000 UTC m=+0.150986231 container attach 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:27 compute-0 ceph-mon[75654]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 23:33:27 compute-0 ceph-mon[75654]: Cluster is now healthy
Nov 25 23:33:27 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:28 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14240 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:33:28 compute-0 ceph-mgr[75954]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 25 23:33:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 25 23:33:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 23:33:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 25 23:33:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 23:33:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 25 23:33:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 23:33:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 25 23:33:28 compute-0 ceph-mon[75654]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 23:33:28 compute-0 ceph-mon[75654]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 23:33:28 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0[75650]: 2025-11-25T23:33:28.178+0000 7efe59ee9640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 23:33:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 23:33:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e2 new map
Nov 25 23:33:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T23:33:28.179667+0000
                                           modified        2025-11-25T23:33:28.179713+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 25 23:33:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 25 23:33:28 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 25 23:33:28 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 25 23:33:28 compute-0 ceph-mgr[75954]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 25 23:33:28 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 25 23:33:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 23:33:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:28 compute-0 ceph-mgr[75954]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 25 23:33:28 compute-0 systemd[1]: libpod-6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1.scope: Deactivated successfully.
Nov 25 23:33:28 compute-0 podman[97536]: 2025-11-25 23:33:28.221003382 +0000 UTC m=+0.765496919 container died 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 25 23:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bba69982c9e489d6b7336acb9b659f5e40e1cf4ba17b46578d045de6f960062-merged.mount: Deactivated successfully.
Nov 25 23:33:28 compute-0 podman[97536]: 2025-11-25 23:33:28.266373235 +0000 UTC m=+0.810866772 container remove 6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1 (image=quay.io/ceph/ceph:v18, name=practical_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:33:28 compute-0 systemd[1]: libpod-conmon-6ee2b2a61d7a9c2e6c72dfcf7a69c37691b91a74b5d68f488158cfba35a499e1.scope: Deactivated successfully.
Nov 25 23:33:28 compute-0 sudo[97576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:28 compute-0 sudo[97576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:28 compute-0 sudo[97533]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:28 compute-0 sudo[97576]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:28 compute-0 sudo[97612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:28 compute-0 sudo[97612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:28 compute-0 sudo[97612]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:28 compute-0 sudo[97637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:28 compute-0 sudo[97637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:28 compute-0 sudo[97637]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:28 compute-0 sudo[97685]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uziikndftidrolkghhrewmyqaqgnhutz ; /usr/bin/python3'
Nov 25 23:33:28 compute-0 sudo[97685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:28 compute-0 sudo[97686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 23:33:28 compute-0 sudo[97686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:28 compute-0 python3[97692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:28 compute-0 ceph-mon[75654]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:28 compute-0 ceph-mon[75654]: from='client.14240 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:33:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 23:33:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 23:33:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 23:33:28 compute-0 ceph-mon[75654]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 23:33:28 compute-0 ceph-mon[75654]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 23:33:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 23:33:28 compute-0 ceph-mon[75654]: osdmap e32: 3 total, 3 up, 3 in
Nov 25 23:33:28 compute-0 ceph-mon[75654]: fsmap cephfs:0
Nov 25 23:33:28 compute-0 ceph-mon[75654]: Saving service mds.cephfs spec with placement compute-0
Nov 25 23:33:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:28 compute-0 podman[97713]: 2025-11-25 23:33:28.715702422 +0000 UTC m=+0.056372522 container create 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 23:33:28 compute-0 systemd[1]: Started libpod-conmon-403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27.scope.
Nov 25 23:33:28 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:28 compute-0 podman[97713]: 2025-11-25 23:33:28.697436906 +0000 UTC m=+0.038107026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46cc0d4b09f50605bc909e3178ba8b0718e0e42999faceb683eee5fd0fffe13/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46cc0d4b09f50605bc909e3178ba8b0718e0e42999faceb683eee5fd0fffe13/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a46cc0d4b09f50605bc909e3178ba8b0718e0e42999faceb683eee5fd0fffe13/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:28 compute-0 podman[97713]: 2025-11-25 23:33:28.808906285 +0000 UTC m=+0.149576415 container init 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 23:33:28 compute-0 podman[97713]: 2025-11-25 23:33:28.816892974 +0000 UTC m=+0.157563104 container start 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:28 compute-0 podman[97713]: 2025-11-25 23:33:28.821194566 +0000 UTC m=+0.161864666 container attach 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:29 compute-0 podman[97805]: 2025-11-25 23:33:29.241855544 +0000 UTC m=+0.091529839 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 25 23:33:29 compute-0 podman[97805]: 2025-11-25 23:33:29.355546051 +0000 UTC m=+0.205220276 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:29 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14242 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:33:29 compute-0 ceph-mgr[75954]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 25 23:33:29 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 25 23:33:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 23:33:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:29 compute-0 upbeat_faraday[97735]: Scheduled mds.cephfs update...
Nov 25 23:33:29 compute-0 systemd[1]: libpod-403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27.scope: Deactivated successfully.
Nov 25 23:33:29 compute-0 podman[97713]: 2025-11-25 23:33:29.401453969 +0000 UTC m=+0.742124089 container died 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a46cc0d4b09f50605bc909e3178ba8b0718e0e42999faceb683eee5fd0fffe13-merged.mount: Deactivated successfully.
Nov 25 23:33:29 compute-0 podman[97713]: 2025-11-25 23:33:29.459589557 +0000 UTC m=+0.800259647 container remove 403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27 (image=quay.io/ceph/ceph:v18, name=upbeat_faraday, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:29 compute-0 systemd[1]: libpod-conmon-403a406339fcbdcdab06af7264a72a435f7f6b29b4a6cdbea2b009c969ed5c27.scope: Deactivated successfully.
Nov 25 23:33:29 compute-0 sudo[97685]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:29 compute-0 sudo[97686]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:33:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:33:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:33:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:29 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev eab50d6d-27b9-4921-b139-e4358bee461f does not exist
Nov 25 23:33:29 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 96ddb597-69f5-4b9f-b6b9-3935e370cd77 does not exist
Nov 25 23:33:29 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev f0dd84be-34cc-4bc4-b40c-75fbdcd85099 does not exist
Nov 25 23:33:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:33:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:33:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:33:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:33:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:29 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:30 compute-0 sudo[97996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:30 compute-0 sudo[97996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:30 compute-0 sudo[97996]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:30 compute-0 sudo[98062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyzmrepwqtjrouqqlmmeatknuweiopez ; /usr/bin/python3'
Nov 25 23:33:30 compute-0 sudo[98062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:30 compute-0 sudo[98050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:30 compute-0 sudo[98050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:30 compute-0 sudo[98050]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:30 compute-0 sudo[98084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:30 compute-0 sudo[98084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:30 compute-0 sudo[98084]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:30 compute-0 python3[98081]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 23:33:30 compute-0 sudo[98062]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:30 compute-0 sudo[98109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:33:30 compute-0 sudo[98109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:30 compute-0 ceph-mon[75654]: from='client.14242 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:33:30 compute-0 ceph-mon[75654]: Saving service mds.cephfs spec with placement compute-0
Nov 25 23:33:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:33:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:33:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:33:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:30 compute-0 sudo[98211]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npcmpflzzguouxknzkcgyjqhgtcmftbh ; /usr/bin/python3'
Nov 25 23:33:30 compute-0 sudo[98211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:30 compute-0 podman[98247]: 2025-11-25 23:33:30.699608978 +0000 UTC m=+0.038348131 container create 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 23:33:30 compute-0 systemd[1]: Started libpod-conmon-99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca.scope.
Nov 25 23:33:30 compute-0 python3[98221]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764113609.944351-36815-146667851931385/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=7118a3e4848d5b96f84dfc7266d24215d2762b5c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:33:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:30 compute-0 podman[98247]: 2025-11-25 23:33:30.764876512 +0000 UTC m=+0.103615705 container init 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:33:30 compute-0 podman[98247]: 2025-11-25 23:33:30.772150512 +0000 UTC m=+0.110889655 container start 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:30 compute-0 podman[98247]: 2025-11-25 23:33:30.775732036 +0000 UTC m=+0.114471209 container attach 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:30 compute-0 distracted_satoshi[98263]: 167 167
Nov 25 23:33:30 compute-0 systemd[1]: libpod-99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca.scope: Deactivated successfully.
Nov 25 23:33:30 compute-0 podman[98247]: 2025-11-25 23:33:30.681187838 +0000 UTC m=+0.019927021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:30 compute-0 sudo[98211]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:30 compute-0 podman[98247]: 2025-11-25 23:33:30.777379818 +0000 UTC m=+0.116118971 container died 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 23:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f201bbbf946f56abbf7ed016d66c91b97da9dee700d4ffaa58d02e911c408bae-merged.mount: Deactivated successfully.
Nov 25 23:33:30 compute-0 podman[98247]: 2025-11-25 23:33:30.822035154 +0000 UTC m=+0.160774337 container remove 99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 25 23:33:30 compute-0 systemd[1]: libpod-conmon-99a64be6b36632d135318213e494a28d1f11e7f02c3ce9776d8d5fc046b9c8ca.scope: Deactivated successfully.
Nov 25 23:33:30 compute-0 podman[98314]: 2025-11-25 23:33:30.999103355 +0000 UTC m=+0.044336758 container create b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 25 23:33:31 compute-0 systemd[1]: Started libpod-conmon-b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c.scope.
Nov 25 23:33:31 compute-0 podman[98314]: 2025-11-25 23:33:30.983748244 +0000 UTC m=+0.028981657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:31 compute-0 podman[98314]: 2025-11-25 23:33:31.106859227 +0000 UTC m=+0.152092700 container init b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:31 compute-0 podman[98314]: 2025-11-25 23:33:31.124672122 +0000 UTC m=+0.169905545 container start b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:33:31 compute-0 podman[98314]: 2025-11-25 23:33:31.128854761 +0000 UTC m=+0.174088194 container attach b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:31 compute-0 sudo[98359]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiibtiuqymlikbqhyufdkhrypfybvzfe ; /usr/bin/python3'
Nov 25 23:33:31 compute-0 sudo[98359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:31 compute-0 python3[98361]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:31 compute-0 ceph-mon[75654]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:31 compute-0 podman[98362]: 2025-11-25 23:33:31.414457195 +0000 UTC m=+0.057576764 container create 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:33:31 compute-0 systemd[1]: Started libpod-conmon-56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134.scope.
Nov 25 23:33:31 compute-0 podman[98362]: 2025-11-25 23:33:31.384938595 +0000 UTC m=+0.028058214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387e6c0558e425423a82f67bfc159d001809ff45419813f3ce54746f0e5bf3ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/387e6c0558e425423a82f67bfc159d001809ff45419813f3ce54746f0e5bf3ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:31 compute-0 podman[98362]: 2025-11-25 23:33:31.514610318 +0000 UTC m=+0.157729917 container init 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:31 compute-0 podman[98362]: 2025-11-25 23:33:31.523511691 +0000 UTC m=+0.166631260 container start 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 23:33:31 compute-0 podman[98362]: 2025-11-25 23:33:31.527959237 +0000 UTC m=+0.171078856 container attach 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:31 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 25 23:33:32 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/92955043' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 23:33:32 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/92955043' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 23:33:32 compute-0 systemd[1]: libpod-56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134.scope: Deactivated successfully.
Nov 25 23:33:32 compute-0 podman[98422]: 2025-11-25 23:33:32.192950552 +0000 UTC m=+0.029742457 container died 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-387e6c0558e425423a82f67bfc159d001809ff45419813f3ce54746f0e5bf3ca-merged.mount: Deactivated successfully.
Nov 25 23:33:32 compute-0 festive_tesla[98331]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:33:32 compute-0 festive_tesla[98331]: --> relative data size: 1.0
Nov 25 23:33:32 compute-0 festive_tesla[98331]: --> All data devices are unavailable
Nov 25 23:33:32 compute-0 systemd[1]: libpod-b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c.scope: Deactivated successfully.
Nov 25 23:33:32 compute-0 systemd[1]: libpod-b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c.scope: Consumed 1.063s CPU time.
Nov 25 23:33:32 compute-0 podman[98422]: 2025-11-25 23:33:32.264683324 +0000 UTC m=+0.101475159 container remove 56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134 (image=quay.io/ceph/ceph:v18, name=distracted_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:32 compute-0 systemd[1]: libpod-conmon-56b45cfa8b2bdc64825c4007a9162308f1b499ee663f626fdef63402f5573134.scope: Deactivated successfully.
Nov 25 23:33:32 compute-0 sudo[98359]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:32 compute-0 podman[98441]: 2025-11-25 23:33:32.311540167 +0000 UTC m=+0.040559849 container died b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1587881158edb7f83a467d6e1d89e8d0ebe775c5b24fc6fb5c6d413efe312b0e-merged.mount: Deactivated successfully.
Nov 25 23:33:32 compute-0 podman[98441]: 2025-11-25 23:33:32.364237552 +0000 UTC m=+0.093257164 container remove b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:32 compute-0 systemd[1]: libpod-conmon-b5d7dfc8b14c72c3fa7fc52630f943a0f7ec7c3eaf2118cb4d56d4d57f2bbb5c.scope: Deactivated successfully.
Nov 25 23:33:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/92955043' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 23:33:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/92955043' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 23:33:32 compute-0 sudo[98109]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:32 compute-0 sudo[98456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:32 compute-0 sudo[98456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:32 compute-0 sudo[98456]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:32 compute-0 sudo[98481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:32 compute-0 sudo[98481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:32 compute-0 sudo[98481]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:32 compute-0 sudo[98506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:32 compute-0 sudo[98506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:32 compute-0 sudo[98506]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:32 compute-0 sudo[98531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:33:32 compute-0 sudo[98531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:32 compute-0 sudo[98597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msqatlsgbezzacdvvohrwbyzovbvlkth ; /usr/bin/python3'
Nov 25 23:33:32 compute-0 sudo[98597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:33 compute-0 python3[98606]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:33 compute-0 podman[98621]: 2025-11-25 23:33:33.17302415 +0000 UTC m=+0.065136041 container create a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:33 compute-0 systemd[1]: Started libpod-conmon-a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017.scope.
Nov 25 23:33:33 compute-0 podman[98621]: 2025-11-25 23:33:33.146338564 +0000 UTC m=+0.038450495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:33 compute-0 podman[98636]: 2025-11-25 23:33:33.239888195 +0000 UTC m=+0.074164676 container create 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 23:33:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:33 compute-0 systemd[1]: Started libpod-conmon-362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6.scope.
Nov 25 23:33:33 compute-0 podman[98621]: 2025-11-25 23:33:33.274349974 +0000 UTC m=+0.166461905 container init a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:33 compute-0 podman[98621]: 2025-11-25 23:33:33.286099121 +0000 UTC m=+0.178211002 container start a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:33:33 compute-0 podman[98621]: 2025-11-25 23:33:33.28992809 +0000 UTC m=+0.182040021 container attach a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 23:33:33 compute-0 romantic_goodall[98650]: 167 167
Nov 25 23:33:33 compute-0 podman[98621]: 2025-11-25 23:33:33.294401238 +0000 UTC m=+0.186513129 container died a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:33 compute-0 podman[98636]: 2025-11-25 23:33:33.21097358 +0000 UTC m=+0.045250111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:33 compute-0 systemd[1]: libpod-a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017.scope: Deactivated successfully.
Nov 25 23:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7d4e1d1b549ee04c4359a2b3a3507bc50221780b185c3d88d58962e7fd4e5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7d4e1d1b549ee04c4359a2b3a3507bc50221780b185c3d88d58962e7fd4e5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:33 compute-0 podman[98636]: 2025-11-25 23:33:33.334231257 +0000 UTC m=+0.168507818 container init 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:33:33 compute-0 podman[98636]: 2025-11-25 23:33:33.344920496 +0000 UTC m=+0.179196947 container start 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad4ce7145cf7f8415eb28ab8d25e616d85828e021e1a10c89c25afbe0373ccb6-merged.mount: Deactivated successfully.
Nov 25 23:33:33 compute-0 podman[98636]: 2025-11-25 23:33:33.348370376 +0000 UTC m=+0.182646867 container attach 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:33:33 compute-0 podman[98621]: 2025-11-25 23:33:33.367603058 +0000 UTC m=+0.259714919 container remove a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_goodall, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:33 compute-0 systemd[1]: libpod-conmon-a51301db37b25d38f1231e57d203893ce5c96cc8edcde1e957fdd4f23c446017.scope: Deactivated successfully.
Nov 25 23:33:33 compute-0 ceph-mon[75654]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:33 compute-0 podman[98680]: 2025-11-25 23:33:33.568272235 +0000 UTC m=+0.057139403 container create 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 23:33:33 compute-0 systemd[1]: Started libpod-conmon-6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81.scope.
Nov 25 23:33:33 compute-0 podman[98680]: 2025-11-25 23:33:33.537929103 +0000 UTC m=+0.026796311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c011b01fe6fb4f3201ff7b278fd316d74272d2326d9d7bf3ec9ecab05cb71f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c011b01fe6fb4f3201ff7b278fd316d74272d2326d9d7bf3ec9ecab05cb71f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c011b01fe6fb4f3201ff7b278fd316d74272d2326d9d7bf3ec9ecab05cb71f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c011b01fe6fb4f3201ff7b278fd316d74272d2326d9d7bf3ec9ecab05cb71f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:33 compute-0 podman[98680]: 2025-11-25 23:33:33.655844471 +0000 UTC m=+0.144711679 container init 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 23:33:33 compute-0 podman[98680]: 2025-11-25 23:33:33.668729467 +0000 UTC m=+0.157596625 container start 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 23:33:33 compute-0 podman[98680]: 2025-11-25 23:33:33.672252228 +0000 UTC m=+0.161119406 container attach 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:33 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 23:33:33 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1389221647' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 23:33:33 compute-0 ecstatic_diffie[98655]: 
Nov 25 23:33:33 compute-0 ecstatic_diffie[98655]: {"fsid":"101922db-575f-58e2-980f-928050464f69","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":143,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1764113585,"num_in_osds":3,"osd_in_since":1764113559,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83849216,"bytes_avail":64328077312,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T23:32:57.989925+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 25 23:33:33 compute-0 systemd[1]: libpod-362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6.scope: Deactivated successfully.
Nov 25 23:33:33 compute-0 podman[98636]: 2025-11-25 23:33:33.955983633 +0000 UTC m=+0.790260084 container died 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a7d4e1d1b549ee04c4359a2b3a3507bc50221780b185c3d88d58962e7fd4e5f-merged.mount: Deactivated successfully.
Nov 25 23:33:33 compute-0 podman[98636]: 2025-11-25 23:33:33.996518571 +0000 UTC m=+0.830795022 container remove 362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6 (image=quay.io/ceph/ceph:v18, name=ecstatic_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 23:33:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:34 compute-0 systemd[1]: libpod-conmon-362dc275b755695e03e8e6762a20ee647ab30ec34f8f500f90e347305502a0e6.scope: Deactivated successfully.
Nov 25 23:33:34 compute-0 sudo[98597]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:34 compute-0 sudo[98758]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnhddtdwsfykfkuddjgsudmxxskhnwml ; /usr/bin/python3'
Nov 25 23:33:34 compute-0 sudo[98758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:34 compute-0 python3[98760]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:34 compute-0 podman[98763]: 2025-11-25 23:33:34.41297657 +0000 UTC m=+0.052716647 container create 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 23:33:34 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1389221647' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 23:33:34 compute-0 systemd[1]: Started libpod-conmon-4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544.scope.
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]: {
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:     "0": [
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:         {
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "devices": [
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "/dev/loop3"
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             ],
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_name": "ceph_lv0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_size": "21470642176",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "name": "ceph_lv0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "tags": {
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.crush_device_class": "",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.encrypted": "0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.osd_id": "0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.type": "block",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.vdo": "0"
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             },
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "type": "block",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "vg_name": "ceph_vg0"
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:         }
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:     ],
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:     "1": [
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:         {
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "devices": [
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "/dev/loop4"
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             ],
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_name": "ceph_lv1",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_size": "21470642176",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "name": "ceph_lv1",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "tags": {
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.crush_device_class": "",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.encrypted": "0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.osd_id": "1",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.type": "block",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.vdo": "0"
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             },
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "type": "block",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "vg_name": "ceph_vg1"
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:         }
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:     ],
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:     "2": [
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:         {
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "devices": [
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "/dev/loop5"
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             ],
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_name": "ceph_lv2",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_size": "21470642176",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "name": "ceph_lv2",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "tags": {
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.crush_device_class": "",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.encrypted": "0",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.osd_id": "2",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.type": "block",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:                 "ceph.vdo": "0"
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             },
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "type": "block",
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:             "vg_name": "ceph_vg2"
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:         }
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]:     ]
Nov 25 23:33:34 compute-0 vigilant_dijkstra[98696]: }
Nov 25 23:33:34 compute-0 podman[98763]: 2025-11-25 23:33:34.384868406 +0000 UTC m=+0.024608483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7021109aa2c27b0b6334d360ba48fe38b57361c8727f182ec604e1573f293a2a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7021109aa2c27b0b6334d360ba48fe38b57361c8727f182ec604e1573f293a2a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:34 compute-0 systemd[1]: libpod-6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81.scope: Deactivated successfully.
Nov 25 23:33:34 compute-0 podman[98680]: 2025-11-25 23:33:34.514354996 +0000 UTC m=+1.003222184 container died 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 23:33:34 compute-0 podman[98763]: 2025-11-25 23:33:34.528924486 +0000 UTC m=+0.168664563 container init 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 23:33:34 compute-0 podman[98763]: 2025-11-25 23:33:34.539976744 +0000 UTC m=+0.179716801 container start 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 23:33:34 compute-0 podman[98763]: 2025-11-25 23:33:34.544671006 +0000 UTC m=+0.184411133 container attach 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-28c011b01fe6fb4f3201ff7b278fd316d74272d2326d9d7bf3ec9ecab05cb71f-merged.mount: Deactivated successfully.
Nov 25 23:33:34 compute-0 podman[98680]: 2025-11-25 23:33:34.58154865 +0000 UTC m=+1.070415848 container remove 6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:34 compute-0 systemd[1]: libpod-conmon-6e5b4fe6efe1a3658ed000e6cec7df460092dc080d01d0b0189fdb125bb76f81.scope: Deactivated successfully.
Nov 25 23:33:34 compute-0 sudo[98531]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:34 compute-0 sudo[98795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:34 compute-0 sudo[98795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:34 compute-0 sudo[98795]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:34 compute-0 sudo[98820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:34 compute-0 sudo[98820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:34 compute-0 sudo[98820]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:34 compute-0 sudo[98864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:34 compute-0 sudo[98864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:34 compute-0 sudo[98864]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:34 compute-0 sudo[98889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:33:35 compute-0 sudo[98889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 23:33:35 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636941601' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 23:33:35 compute-0 zealous_moore[98780]: 
Nov 25 23:33:35 compute-0 zealous_moore[98780]: {"epoch":1,"fsid":"101922db-575f-58e2-980f-928050464f69","modified":"2025-11-25T23:31:04.907397Z","created":"2025-11-25T23:31:04.907397Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 25 23:33:35 compute-0 zealous_moore[98780]: dumped monmap epoch 1
Nov 25 23:33:35 compute-0 systemd[1]: libpod-4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544.scope: Deactivated successfully.
Nov 25 23:33:35 compute-0 podman[98763]: 2025-11-25 23:33:35.117640071 +0000 UTC m=+0.757380158 container died 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7021109aa2c27b0b6334d360ba48fe38b57361c8727f182ec604e1573f293a2a-merged.mount: Deactivated successfully.
Nov 25 23:33:35 compute-0 podman[98763]: 2025-11-25 23:33:35.176294841 +0000 UTC m=+0.816034948 container remove 4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544 (image=quay.io/ceph/ceph:v18, name=zealous_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:35 compute-0 systemd[1]: libpod-conmon-4e47610680b788284b7d97b972dae541f3925b753455c02e5e316ee49cbb7544.scope: Deactivated successfully.
Nov 25 23:33:35 compute-0 sudo[98758]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:35 compute-0 podman[98968]: 2025-11-25 23:33:35.415398961 +0000 UTC m=+0.063589680 container create 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:35 compute-0 ceph-mon[75654]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:35 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3636941601' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 23:33:35 compute-0 systemd[1]: Started libpod-conmon-029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83.scope.
Nov 25 23:33:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:35 compute-0 podman[98968]: 2025-11-25 23:33:35.400243796 +0000 UTC m=+0.048434535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:35 compute-0 podman[98968]: 2025-11-25 23:33:35.494245139 +0000 UTC m=+0.142435878 container init 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:33:35 compute-0 podman[98968]: 2025-11-25 23:33:35.501725964 +0000 UTC m=+0.149916683 container start 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:33:35 compute-0 podman[98968]: 2025-11-25 23:33:35.505541664 +0000 UTC m=+0.153732483 container attach 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:35 compute-0 sweet_pare[98985]: 167 167
Nov 25 23:33:35 compute-0 systemd[1]: libpod-029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83.scope: Deactivated successfully.
Nov 25 23:33:35 compute-0 podman[98968]: 2025-11-25 23:33:35.507139176 +0000 UTC m=+0.155329895 container died 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c838371db4d7157eeec4ddd6e9863561b0e3ecd8e41b54cd0a6e66ad4050f82c-merged.mount: Deactivated successfully.
Nov 25 23:33:35 compute-0 podman[98968]: 2025-11-25 23:33:35.556156225 +0000 UTC m=+0.204346944 container remove 029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:35 compute-0 systemd[1]: libpod-conmon-029aab30d63dc03d5b21f03a3046fdca9ed139f630dd6f2df4dc23e77be1bc83.scope: Deactivated successfully.
Nov 25 23:33:35 compute-0 sudo[99027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhirqppqggskdqbzjazzxtyugfqcipqu ; /usr/bin/python3'
Nov 25 23:33:35 compute-0 sudo[99027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:35 compute-0 podman[99035]: 2025-11-25 23:33:35.765046586 +0000 UTC m=+0.043352152 container create c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:35 compute-0 systemd[1]: Started libpod-conmon-c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735.scope.
Nov 25 23:33:35 compute-0 podman[99035]: 2025-11-25 23:33:35.744671025 +0000 UTC m=+0.022976581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:35 compute-0 python3[99029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:35 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5e9b3fa028cc97b6805d8e0cadab1cfc6e31ff24a57ab9c2a223cb7b754ca3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5e9b3fa028cc97b6805d8e0cadab1cfc6e31ff24a57ab9c2a223cb7b754ca3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5e9b3fa028cc97b6805d8e0cadab1cfc6e31ff24a57ab9c2a223cb7b754ca3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c5e9b3fa028cc97b6805d8e0cadab1cfc6e31ff24a57ab9c2a223cb7b754ca3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:35 compute-0 podman[99035]: 2025-11-25 23:33:35.891607319 +0000 UTC m=+0.169912915 container init c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:35 compute-0 podman[99035]: 2025-11-25 23:33:35.906430886 +0000 UTC m=+0.184736462 container start c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:33:35 compute-0 podman[99035]: 2025-11-25 23:33:35.911065307 +0000 UTC m=+0.189370943 container attach c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 23:33:35 compute-0 podman[99054]: 2025-11-25 23:33:35.925791222 +0000 UTC m=+0.061830765 container create bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:35 compute-0 systemd[1]: Started libpod-conmon-bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e.scope.
Nov 25 23:33:35 compute-0 podman[99054]: 2025-11-25 23:33:35.902711949 +0000 UTC m=+0.038751512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edcfe5b168b08b3daa6d5dc5e6de882ab41de9dce439fdb8f70a14d82a1f599/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edcfe5b168b08b3daa6d5dc5e6de882ab41de9dce439fdb8f70a14d82a1f599/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:36 compute-0 podman[99054]: 2025-11-25 23:33:36.025115334 +0000 UTC m=+0.161154917 container init bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:33:36 compute-0 podman[99054]: 2025-11-25 23:33:36.034906219 +0000 UTC m=+0.170945772 container start bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:36 compute-0 podman[99054]: 2025-11-25 23:33:36.038443771 +0000 UTC m=+0.174483424 container attach bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 25 23:33:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1958623184' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 23:33:36 compute-0 gallant_mcnulty[99071]: [client.openstack]
Nov 25 23:33:36 compute-0 gallant_mcnulty[99071]:         key = AQAfPCZpAAAAABAAikUZSrYMJ3qAPbvPGOplUw==
Nov 25 23:33:36 compute-0 gallant_mcnulty[99071]:         caps mgr = "allow *"
Nov 25 23:33:36 compute-0 gallant_mcnulty[99071]:         caps mon = "profile rbd"
Nov 25 23:33:36 compute-0 gallant_mcnulty[99071]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 25 23:33:36 compute-0 systemd[1]: libpod-bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e.scope: Deactivated successfully.
Nov 25 23:33:36 compute-0 podman[99054]: 2025-11-25 23:33:36.629795685 +0000 UTC m=+0.765835238 container died bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4edcfe5b168b08b3daa6d5dc5e6de882ab41de9dce439fdb8f70a14d82a1f599-merged.mount: Deactivated successfully.
Nov 25 23:33:36 compute-0 podman[99054]: 2025-11-25 23:33:36.68438943 +0000 UTC m=+0.820428983 container remove bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e (image=quay.io/ceph/ceph:v18, name=gallant_mcnulty, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:36 compute-0 systemd[1]: libpod-conmon-bcdc9b7823e96a848b30d9ca17759a6d29c38ec5ba4814321c953e649fef602e.scope: Deactivated successfully.
Nov 25 23:33:36 compute-0 sudo[99027]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:36 compute-0 frosty_merkle[99051]: {
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "osd_id": 2,
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "type": "bluestore"
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:     },
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "osd_id": 1,
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "type": "bluestore"
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:     },
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "osd_id": 0,
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:         "type": "bluestore"
Nov 25 23:33:36 compute-0 frosty_merkle[99051]:     }
Nov 25 23:33:36 compute-0 frosty_merkle[99051]: }
Nov 25 23:33:36 compute-0 systemd[1]: libpod-c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735.scope: Deactivated successfully.
Nov 25 23:33:36 compute-0 podman[99136]: 2025-11-25 23:33:36.925132733 +0000 UTC m=+0.041028693 container died c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 23:33:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c5e9b3fa028cc97b6805d8e0cadab1cfc6e31ff24a57ab9c2a223cb7b754ca3-merged.mount: Deactivated successfully.
Nov 25 23:33:36 compute-0 podman[99136]: 2025-11-25 23:33:36.983524446 +0000 UTC m=+0.099420386 container remove c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_merkle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:33:36 compute-0 systemd[1]: libpod-conmon-c1299bc5ab0376dca5f2b7f92c4ce9bb6bc4c0214ae945f0346723aa71eb6735.scope: Deactivated successfully.
Nov 25 23:33:37 compute-0 sudo[98889]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:37 compute-0 ceph-mgr[75954]: [progress INFO root] update: starting ev 343aa52f-96fb-47a1-88c6-7d88c4dd8423 (Updating mds.cephfs deployment (+1 -> 1))
Nov 25 23:33:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgauhq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 25 23:33:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgauhq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 23:33:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgauhq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 23:33:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:37 compute-0 ceph-mgr[75954]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.bgauhq on compute-0
Nov 25 23:33:37 compute-0 ceph-mgr[75954]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.bgauhq on compute-0
Nov 25 23:33:37 compute-0 sudo[99151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:37 compute-0 sudo[99151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:37 compute-0 sudo[99151]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:37 compute-0 sudo[99176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:37 compute-0 sudo[99176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:37 compute-0 sudo[99176]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:37 compute-0 sudo[99201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:37 compute-0 sudo[99201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:37 compute-0 sudo[99201]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:37 compute-0 sudo[99226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 101922db-575f-58e2-980f-928050464f69
Nov 25 23:33:37 compute-0 sudo[99226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:37 compute-0 ceph-mon[75654]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:37 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1958623184' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 23:33:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgauhq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 23:33:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bgauhq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 23:33:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:37 compute-0 podman[99294]: 2025-11-25 23:33:37.665761571 +0000 UTC m=+0.042399497 container create 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:37 compute-0 systemd[1]: Started libpod-conmon-235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54.scope.
Nov 25 23:33:37 compute-0 podman[99294]: 2025-11-25 23:33:37.645565687 +0000 UTC m=+0.022203643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:37 compute-0 podman[99294]: 2025-11-25 23:33:37.773571694 +0000 UTC m=+0.150209620 container init 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:33:37 compute-0 podman[99294]: 2025-11-25 23:33:37.784265704 +0000 UTC m=+0.160903660 container start 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:33:37 compute-0 podman[99294]: 2025-11-25 23:33:37.788488897 +0000 UTC m=+0.165126843 container attach 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:37 compute-0 wonderful_bell[99318]: 167 167
Nov 25 23:33:37 compute-0 systemd[1]: libpod-235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54.scope: Deactivated successfully.
Nov 25 23:33:37 compute-0 podman[99294]: 2025-11-25 23:33:37.790755563 +0000 UTC m=+0.167393519 container died 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 23:33:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e415b742c9a48c8f006850b4cc2b077e4ca35e5985266a3d5378ff4abe3838b-merged.mount: Deactivated successfully.
Nov 25 23:33:37 compute-0 podman[99294]: 2025-11-25 23:33:37.832693247 +0000 UTC m=+0.209331203 container remove 235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 23:33:37 compute-0 systemd[1]: libpod-conmon-235a9c2120502549b74b4741ae768b661891e14f262e55159900881b8c070e54.scope: Deactivated successfully.
Nov 25 23:33:37 compute-0 systemd[1]: Reloading.
Nov 25 23:33:37 compute-0 systemd-rc-local-generator[99424]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:33:37 compute-0 systemd-sysv-generator[99430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:33:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:38 compute-0 systemd[1]: Reloading.
Nov 25 23:33:38 compute-0 systemd-rc-local-generator[99545]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:33:38 compute-0 systemd-sysv-generator[99549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:33:38 compute-0 ceph-mon[75654]: Deploying daemon mds.cephfs.compute-0.bgauhq on compute-0
Nov 25 23:33:38 compute-0 sudo[99514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjjudprrmccwywsrfcuycyaifkktzgie ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764113617.772573-36887-60033739649088/async_wrapper.py j119083935891 30 /home/zuul/.ansible/tmp/ansible-tmp-1764113617.772573-36887-60033739649088/AnsiballZ_command.py _'
Nov 25 23:33:38 compute-0 sudo[99514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:38 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.bgauhq for 101922db-575f-58e2-980f-928050464f69...
Nov 25 23:33:38 compute-0 ansible-async_wrapper.py[99556]: Invoked with j119083935891 30 /home/zuul/.ansible/tmp/ansible-tmp-1764113617.772573-36887-60033739649088/AnsiballZ_command.py _
Nov 25 23:33:38 compute-0 ansible-async_wrapper.py[99585]: Starting module and watcher
Nov 25 23:33:38 compute-0 ansible-async_wrapper.py[99585]: Start watching 99586 (30)
Nov 25 23:33:38 compute-0 ansible-async_wrapper.py[99586]: Start module (99586)
Nov 25 23:33:38 compute-0 ansible-async_wrapper.py[99556]: Return async_wrapper task started.
Nov 25 23:33:38 compute-0 sudo[99514]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:38 compute-0 python3[99587]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:38 compute-0 podman[99611]: 2025-11-25 23:33:38.829586542 +0000 UTC m=+0.054093483 container create 6b229c37a0bc838d3ecf26a033cf171f9949458f3108f561377ced7bcf58e37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mds-cephfs-compute-0-bgauhq, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4294b571599acfa933f696cc4337fcd86d2118bd3bdd123445aa1a56bba7651f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4294b571599acfa933f696cc4337fcd86d2118bd3bdd123445aa1a56bba7651f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4294b571599acfa933f696cc4337fcd86d2118bd3bdd123445aa1a56bba7651f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4294b571599acfa933f696cc4337fcd86d2118bd3bdd123445aa1a56bba7651f/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.bgauhq supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:38 compute-0 podman[99611]: 2025-11-25 23:33:38.801588128 +0000 UTC m=+0.026095159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:38 compute-0 podman[99611]: 2025-11-25 23:33:38.911222165 +0000 UTC m=+0.135729186 container init 6b229c37a0bc838d3ecf26a033cf171f9949458f3108f561377ced7bcf58e37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mds-cephfs-compute-0-bgauhq, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:33:38 compute-0 podman[99611]: 2025-11-25 23:33:38.915710834 +0000 UTC m=+0.140217815 container start 6b229c37a0bc838d3ecf26a033cf171f9949458f3108f561377ced7bcf58e37f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-101922db-575f-58e2-980f-928050464f69-mds-cephfs-compute-0-bgauhq, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 23:33:38 compute-0 bash[99611]: 6b229c37a0bc838d3ecf26a033cf171f9949458f3108f561377ced7bcf58e37f
Nov 25 23:33:38 compute-0 podman[99624]: 2025-11-25 23:33:38.926476508 +0000 UTC m=+0.071896507 container create 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:38 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.bgauhq for 101922db-575f-58e2-980f-928050464f69.
Nov 25 23:33:38 compute-0 ceph-mds[99641]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 23:33:38 compute-0 ceph-mds[99641]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 25 23:33:38 compute-0 sudo[99226]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:38 compute-0 ceph-mds[99641]: main not setting numa affinity
Nov 25 23:33:38 compute-0 ceph-mds[99641]: pidfile_write: ignore empty --pid-file
Nov 25 23:33:38 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mds-cephfs-compute-0-bgauhq[99635]: starting mds.cephfs.compute-0.bgauhq at 
Nov 25 23:33:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:38 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq Updating MDS map to version 2 from mon.0
Nov 25 23:33:38 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:38 compute-0 systemd[1]: Started libpod-conmon-68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2.scope.
Nov 25 23:33:38 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 23:33:38 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:38 compute-0 ceph-mgr[75954]: [progress INFO root] complete: finished ev 343aa52f-96fb-47a1-88c6-7d88c4dd8423 (Updating mds.cephfs deployment (+1 -> 1))
Nov 25 23:33:38 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event 343aa52f-96fb-47a1-88c6-7d88c4dd8423 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 25 23:33:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 25 23:33:38 compute-0 podman[99624]: 2025-11-25 23:33:38.89833258 +0000 UTC m=+0.043752669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:38 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 25 23:33:38 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:39 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5bccdc04fc614844d60f4975be1ede26c0c2e2aba5932a60a0eb6ea2852520/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5bccdc04fc614844d60f4975be1ede26c0c2e2aba5932a60a0eb6ea2852520/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:39 compute-0 podman[99624]: 2025-11-25 23:33:39.035237374 +0000 UTC m=+0.180657433 container init 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:39 compute-0 podman[99624]: 2025-11-25 23:33:39.047288498 +0000 UTC m=+0.192708537 container start 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:39 compute-0 podman[99624]: 2025-11-25 23:33:39.053419238 +0000 UTC m=+0.198839317 container attach 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:33:39 compute-0 sudo[99668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:39 compute-0 sudo[99668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:39 compute-0 sudo[99668]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:39 compute-0 sudo[99694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:33:39 compute-0 sudo[99694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:39 compute-0 sudo[99694]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:39 compute-0 sudo[99719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:39 compute-0 sudo[99719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:39 compute-0 sudo[99719]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:39 compute-0 sudo[99744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:39 compute-0 sudo[99744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:39 compute-0 sudo[99744]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:39 compute-0 sudo[99779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:39 compute-0 sudo[99779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:39 compute-0 sudo[99779]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:39 compute-0 ceph-mon[75654]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:39 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:39 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:39 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:39 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:39 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:39 compute-0 sudo[99813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 23:33:39 compute-0 sudo[99813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:39 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:33:39 compute-0 vibrant_einstein[99665]: 
Nov 25 23:33:39 compute-0 vibrant_einstein[99665]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 23:33:39 compute-0 systemd[1]: libpod-68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2.scope: Deactivated successfully.
Nov 25 23:33:39 compute-0 podman[99624]: 2025-11-25 23:33:39.614013657 +0000 UTC m=+0.759433676 container died 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 23:33:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c5bccdc04fc614844d60f4975be1ede26c0c2e2aba5932a60a0eb6ea2852520-merged.mount: Deactivated successfully.
Nov 25 23:33:39 compute-0 podman[99624]: 2025-11-25 23:33:39.677883398 +0000 UTC m=+0.823303427 container remove 68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2 (image=quay.io/ceph/ceph:v18, name=vibrant_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:33:39 compute-0 systemd[1]: libpod-conmon-68cb4f44af288c5bd30e76d811d9d3a02e6369c2216338179245b54a29b023c2.scope: Deactivated successfully.
Nov 25 23:33:39 compute-0 ansible-async_wrapper.py[99586]: Module complete (99586)
Nov 25 23:33:39 compute-0 sudo[99931]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptonlthmzocewcgovawefzyxdxrailgj ; /usr/bin/python3'
Nov 25 23:33:39 compute-0 sudo[99931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e3 new map
Nov 25 23:33:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T23:33:28.179667+0000
                                           modified        2025-11-25T23:33:28.179713+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.bgauhq{-1:14252} state up:standby seq 1 addr [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] compat {c=[1],r=[1],i=[7ff]}]
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq Updating MDS map to version 3 from mon.0
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq Monitors have assigned me to become a standby.
Nov 25 23:33:39 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] up:boot
Nov 25 23:33:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] as mds.0
Nov 25 23:33:39 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bgauhq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 23:33:39 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 23:33:39 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 23:33:39 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 25 23:33:39 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 25 23:33:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.bgauhq"} v 0) v1
Nov 25 23:33:39 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bgauhq"}]: dispatch
Nov 25 23:33:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e3 all = 0
Nov 25 23:33:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e4 new map
Nov 25 23:33:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T23:33:28.179667+0000
                                           modified        2025-11-25T23:33:39.976596+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14252}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.bgauhq{0:14252} state up:creating seq 1 addr [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 25 23:33:39 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bgauhq=up:creating}
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq Updating MDS map to version 4 from mon.0
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x1
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x100
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x600
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x601
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x602
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x603
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x604
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x605
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x606
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x607
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x608
Nov 25 23:33:39 compute-0 ceph-mds[99641]: mds.0.cache creating system inode with ino:0x609
Nov 25 23:33:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:40 compute-0 ceph-mds[99641]: mds.0.4 creating_done
Nov 25 23:33:40 compute-0 ceph-mon[75654]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bgauhq is now active in filesystem cephfs as rank 0
Nov 25 23:33:40 compute-0 python3[99939]: ansible-ansible.legacy.async_status Invoked with jid=j119083935891.99556 mode=status _async_dir=/root/.ansible_async
Nov 25 23:33:40 compute-0 sudo[99931]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:40 compute-0 podman[99982]: 2025-11-25 23:33:40.165570687 +0000 UTC m=+0.073299221 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 23:33:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:40 compute-0 sudo[100048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaeouwmcrgssdflybmghbxgtbnryqpga ; /usr/bin/python3'
Nov 25 23:33:40 compute-0 sudo[100048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:40 compute-0 podman[99982]: 2025-11-25 23:33:40.314543734 +0000 UTC m=+0.222272148 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:40 compute-0 python3[100050]: ansible-ansible.legacy.async_status Invoked with jid=j119083935891.99556 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 23:33:40 compute-0 sudo[100048]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:40 compute-0 sudo[100192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nutvszydeuethsqjyjcdxieznzzdzoek ; /usr/bin/python3'
Nov 25 23:33:40 compute-0 sudo[100192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:40 compute-0 sudo[99813]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:40 compute-0 ceph-mon[75654]: from='client.14254 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:33:40 compute-0 ceph-mon[75654]: mds.? [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] up:boot
Nov 25 23:33:40 compute-0 ceph-mon[75654]: daemon mds.cephfs.compute-0.bgauhq assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 23:33:40 compute-0 ceph-mon[75654]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 23:33:40 compute-0 ceph-mon[75654]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 23:33:40 compute-0 ceph-mon[75654]: Cluster is now healthy
Nov 25 23:33:40 compute-0 ceph-mon[75654]: fsmap cephfs:0 1 up:standby
Nov 25 23:33:40 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bgauhq"}]: dispatch
Nov 25 23:33:40 compute-0 ceph-mon[75654]: fsmap cephfs:1 {0=cephfs.compute-0.bgauhq=up:creating}
Nov 25 23:33:40 compute-0 ceph-mon[75654]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:40 compute-0 ceph-mon[75654]: daemon mds.cephfs.compute-0.bgauhq is now active in filesystem cephfs as rank 0
Nov 25 23:33:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e5 new map
Nov 25 23:33:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-25T23:33:28.179667+0000
                                           modified        2025-11-25T23:33:40.984330+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14252}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.bgauhq{0:14252} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 25 23:33:40 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq Updating MDS map to version 5 from mon.0
Nov 25 23:33:40 compute-0 ceph-mds[99641]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 25 23:33:40 compute-0 ceph-mds[99641]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 25 23:33:40 compute-0 ceph-mds[99641]: mds.0.4 recovery_done -- successful recovery!
Nov 25 23:33:40 compute-0 ceph-mds[99641]: mds.0.4 active_start
Nov 25 23:33:40 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] up:active
Nov 25 23:33:40 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bgauhq=up:active}
Nov 25 23:33:40 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:41 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:33:41 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:33:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:33:41 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:41 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev ac0906c8-6b00-499f-ae50-1fb05cac968c does not exist
Nov 25 23:33:41 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 2abc0533-6a99-4a7e-ab93-a86cbaf900c4 does not exist
Nov 25 23:33:41 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 298aa286-44c3-4080-b22a-8342d0fd1a79 does not exist
Nov 25 23:33:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:33:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:33:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:33:41 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:33:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:41 compute-0 ceph-mgr[75954]: [progress INFO root] Writing back 4 completed events
Nov 25 23:33:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 23:33:41 compute-0 python3[100196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:41 compute-0 sudo[100199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:41 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:41 compute-0 sudo[100199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:41 compute-0 sudo[100199]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:41 compute-0 podman[100223]: 2025-11-25 23:33:41.170235342 +0000 UTC m=+0.050682670 container create 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 23:33:41 compute-0 sudo[100225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:41 compute-0 sudo[100225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:41 compute-0 sudo[100225]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:41 compute-0 systemd[1]: Started libpod-conmon-7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d.scope.
Nov 25 23:33:41 compute-0 podman[100223]: 2025-11-25 23:33:41.153033341 +0000 UTC m=+0.033480659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d918b341a5630ffb48c3b70ff4b41be27c4f27786a539ca97a5eb8ced76714d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d918b341a5630ffb48c3b70ff4b41be27c4f27786a539ca97a5eb8ced76714d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:41 compute-0 sudo[100264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:41 compute-0 sudo[100264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:41 compute-0 podman[100223]: 2025-11-25 23:33:41.271232118 +0000 UTC m=+0.151679456 container init 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:41 compute-0 sudo[100264]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:41 compute-0 podman[100223]: 2025-11-25 23:33:41.276690081 +0000 UTC m=+0.157137419 container start 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 23:33:41 compute-0 podman[100223]: 2025-11-25 23:33:41.28236423 +0000 UTC m=+0.162811578 container attach 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:41 compute-0 sudo[100293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:33:41 compute-0 sudo[100293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:41 compute-0 podman[100376]: 2025-11-25 23:33:41.723346179 +0000 UTC m=+0.058820428 container create 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:41 compute-0 systemd[1]: Started libpod-conmon-64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1.scope.
Nov 25 23:33:41 compute-0 podman[100376]: 2025-11-25 23:33:41.693401387 +0000 UTC m=+0.028875636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:41 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:33:41 compute-0 intelligent_raman[100271]: 
Nov 25 23:33:41 compute-0 intelligent_raman[100271]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 25 23:33:41 compute-0 podman[100376]: 2025-11-25 23:33:41.812317471 +0000 UTC m=+0.147791720 container init 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 23:33:41 compute-0 systemd[1]: libpod-7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d.scope: Deactivated successfully.
Nov 25 23:33:41 compute-0 podman[100223]: 2025-11-25 23:33:41.816281708 +0000 UTC m=+0.696729036 container died 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:33:41 compute-0 podman[100376]: 2025-11-25 23:33:41.825258007 +0000 UTC m=+0.160732256 container start 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:41 compute-0 podman[100376]: 2025-11-25 23:33:41.828830734 +0000 UTC m=+0.164305043 container attach 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:41 compute-0 busy_matsumoto[100392]: 167 167
Nov 25 23:33:41 compute-0 systemd[1]: libpod-64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1.scope: Deactivated successfully.
Nov 25 23:33:41 compute-0 podman[100376]: 2025-11-25 23:33:41.833717403 +0000 UTC m=+0.169191652 container died 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d918b341a5630ffb48c3b70ff4b41be27c4f27786a539ca97a5eb8ced76714d-merged.mount: Deactivated successfully.
Nov 25 23:33:41 compute-0 podman[100223]: 2025-11-25 23:33:41.878898427 +0000 UTC m=+0.759345755 container remove 7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d (image=quay.io/ceph/ceph:v18, name=intelligent_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ee2873269a2001c4e694ec1fc085f31c51b0c9a1ab0c0081d4026a1612edbac-merged.mount: Deactivated successfully.
Nov 25 23:33:41 compute-0 systemd[1]: libpod-conmon-7872e106bbf32f5b2ebabc81ee6172d1aab33d4625b0eb42b9c1049a0bbaca5d.scope: Deactivated successfully.
Nov 25 23:33:41 compute-0 sudo[100192]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:41 compute-0 podman[100376]: 2025-11-25 23:33:41.922441811 +0000 UTC m=+0.257916040 container remove 64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:41 compute-0 systemd[1]: libpod-conmon-64c0cf8d60960662c162c14c35453be15f5d5dacecef7bb801fe1c3c262494a1.scope: Deactivated successfully.
Nov 25 23:33:41 compute-0 ceph-mon[75654]: mds.? [v2:192.168.122.100:6814/1816913204,v1:192.168.122.100:6815/1816913204] up:active
Nov 25 23:33:41 compute-0 ceph-mon[75654]: fsmap cephfs:1 {0=cephfs.compute-0.bgauhq=up:active}
Nov 25 23:33:41 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:41 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:41 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:41 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:33:41 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:41 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:33:41 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:33:41 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:41 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:42 compute-0 podman[100428]: 2025-11-25 23:33:42.148688975 +0000 UTC m=+0.066963205 container create 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:33:42 compute-0 systemd[1]: Started libpod-conmon-6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9.scope.
Nov 25 23:33:42 compute-0 podman[100428]: 2025-11-25 23:33:42.12187714 +0000 UTC m=+0.040151420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:42 compute-0 podman[100428]: 2025-11-25 23:33:42.283646131 +0000 UTC m=+0.201920411 container init 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:42 compute-0 podman[100428]: 2025-11-25 23:33:42.295958802 +0000 UTC m=+0.214233042 container start 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:42 compute-0 podman[100428]: 2025-11-25 23:33:42.300196496 +0000 UTC m=+0.218470746 container attach 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 23:33:42 compute-0 sudo[100473]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vixybhygbijsrutzpljvrxsefunlhaqp ; /usr/bin/python3'
Nov 25 23:33:42 compute-0 sudo[100473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:42 compute-0 python3[100475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:42 compute-0 podman[100476]: 2025-11-25 23:33:42.925097996 +0000 UTC m=+0.063065011 container create b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:42 compute-0 systemd[1]: Started libpod-conmon-b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c.scope.
Nov 25 23:33:42 compute-0 podman[100476]: 2025-11-25 23:33:42.900378082 +0000 UTC m=+0.038345137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:43 compute-0 ceph-mon[75654]: from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:33:43 compute-0 ceph-mon[75654]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dc89a677651595fc7596504a6a280aa28849e37c903471e1795359f28901ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76dc89a677651595fc7596504a6a280aa28849e37c903471e1795359f28901ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:43 compute-0 podman[100476]: 2025-11-25 23:33:43.027783363 +0000 UTC m=+0.165750458 container init b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:43 compute-0 podman[100476]: 2025-11-25 23:33:43.039618273 +0000 UTC m=+0.177585318 container start b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:43 compute-0 podman[100476]: 2025-11-25 23:33:43.042906163 +0000 UTC m=+0.180873208 container attach b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 23:33:43 compute-0 eager_kalam[100445]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:33:43 compute-0 eager_kalam[100445]: --> relative data size: 1.0
Nov 25 23:33:43 compute-0 eager_kalam[100445]: --> All data devices are unavailable
Nov 25 23:33:43 compute-0 systemd[1]: libpod-6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9.scope: Deactivated successfully.
Nov 25 23:33:43 compute-0 podman[100428]: 2025-11-25 23:33:43.406390689 +0000 UTC m=+1.324664889 container died 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:43 compute-0 systemd[1]: libpod-6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9.scope: Consumed 1.040s CPU time.
Nov 25 23:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd17c32f19ca499b4835b86a1c94693069af9254b0e08220a6d33d66ee9b5dbd-merged.mount: Deactivated successfully.
Nov 25 23:33:43 compute-0 podman[100428]: 2025-11-25 23:33:43.454236688 +0000 UTC m=+1.372510888 container remove 6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 23:33:43 compute-0 systemd[1]: libpod-conmon-6f56cf7fddf190a9a2d7af9eb5c1b0218dabfc408206823c82a62402d47991a9.scope: Deactivated successfully.
Nov 25 23:33:43 compute-0 sudo[100293]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:43 compute-0 sudo[100550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:43 compute-0 sudo[100550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:43 compute-0 sudo[100550]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:43 compute-0 sudo[100575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:43 compute-0 sudo[100575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:43 compute-0 sudo[100575]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:43 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:33:43 compute-0 trusting_lewin[100495]: 
Nov 25 23:33:43 compute-0 trusting_lewin[100495]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}]
Nov 25 23:33:43 compute-0 sudo[100600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:43 compute-0 sudo[100600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:43 compute-0 sudo[100600]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:43 compute-0 systemd[1]: libpod-b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c.scope: Deactivated successfully.
Nov 25 23:33:43 compute-0 podman[100476]: 2025-11-25 23:33:43.661495759 +0000 UTC m=+0.799462774 container died b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:43 compute-0 ansible-async_wrapper.py[99585]: Done in kid B.
Nov 25 23:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-76dc89a677651595fc7596504a6a280aa28849e37c903471e1795359f28901ab-merged.mount: Deactivated successfully.
Nov 25 23:33:43 compute-0 podman[100476]: 2025-11-25 23:33:43.699046846 +0000 UTC m=+0.837013861 container remove b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c (image=quay.io/ceph/ceph:v18, name=trusting_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:43 compute-0 systemd[1]: libpod-conmon-b0a58d2f5e716cbf017d0a78b7fd059fac299039b364c307ababe7aa3cfecd6c.scope: Deactivated successfully.
Nov 25 23:33:43 compute-0 sudo[100473]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:43 compute-0 sudo[100628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:33:43 compute-0 sudo[100628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:44 compute-0 podman[100705]: 2025-11-25 23:33:44.060442131 +0000 UTC m=+0.056487351 container create fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 23:33:44 compute-0 systemd[1]: Started libpod-conmon-fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c.scope.
Nov 25 23:33:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:44 compute-0 podman[100705]: 2025-11-25 23:33:44.042703818 +0000 UTC m=+0.038749058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:44 compute-0 podman[100705]: 2025-11-25 23:33:44.13821675 +0000 UTC m=+0.134262010 container init fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:44 compute-0 podman[100705]: 2025-11-25 23:33:44.144582266 +0000 UTC m=+0.140627476 container start fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:44 compute-0 podman[100705]: 2025-11-25 23:33:44.14884028 +0000 UTC m=+0.144885520 container attach fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:44 compute-0 sad_joliot[100721]: 167 167
Nov 25 23:33:44 compute-0 systemd[1]: libpod-fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c.scope: Deactivated successfully.
Nov 25 23:33:44 compute-0 podman[100705]: 2025-11-25 23:33:44.150442709 +0000 UTC m=+0.146487939 container died fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:33:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-418fb8786eb49df7ba841069155a9aef950f24003c65ef6bb321cdc9f77f31c0-merged.mount: Deactivated successfully.
Nov 25 23:33:44 compute-0 podman[100705]: 2025-11-25 23:33:44.191390589 +0000 UTC m=+0.187435839 container remove fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:44 compute-0 systemd[1]: libpod-conmon-fd6384582a559416e96b07b746f7ea12cc2e38584313451f482520dd2604959c.scope: Deactivated successfully.
Nov 25 23:33:44 compute-0 podman[100745]: 2025-11-25 23:33:44.372858411 +0000 UTC m=+0.037495257 container create e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:44 compute-0 systemd[1]: Started libpod-conmon-e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632.scope.
Nov 25 23:33:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d84f74cc53deaba3ebdc2fa2d2ef1361a2e80735fc495aaaac579299d11bb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d84f74cc53deaba3ebdc2fa2d2ef1361a2e80735fc495aaaac579299d11bb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d84f74cc53deaba3ebdc2fa2d2ef1361a2e80735fc495aaaac579299d11bb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d84f74cc53deaba3ebdc2fa2d2ef1361a2e80735fc495aaaac579299d11bb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:44 compute-0 podman[100745]: 2025-11-25 23:33:44.452988987 +0000 UTC m=+0.117625843 container init e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:33:44 compute-0 podman[100745]: 2025-11-25 23:33:44.358536021 +0000 UTC m=+0.023172887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:44 compute-0 podman[100745]: 2025-11-25 23:33:44.460432829 +0000 UTC m=+0.125069675 container start e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 23:33:44 compute-0 podman[100745]: 2025-11-25 23:33:44.46293343 +0000 UTC m=+0.127570296 container attach e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 23:33:44 compute-0 sudo[100790]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htfhdpdisdhgaptxevbbraxeayiqured ; /usr/bin/python3'
Nov 25 23:33:44 compute-0 sudo[100790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:44 compute-0 python3[100792]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:44 compute-0 podman[100793]: 2025-11-25 23:33:44.701735462 +0000 UTC m=+0.036764589 container create ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:44 compute-0 systemd[1]: Started libpod-conmon-ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f.scope.
Nov 25 23:33:44 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c35bc2c1aa6742e14dd57843e66050bda32fd28d815a554e80c29f945c8847c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c35bc2c1aa6742e14dd57843e66050bda32fd28d815a554e80c29f945c8847c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:44 compute-0 podman[100793]: 2025-11-25 23:33:44.763034469 +0000 UTC m=+0.098063606 container init ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 23:33:44 compute-0 podman[100793]: 2025-11-25 23:33:44.768460961 +0000 UTC m=+0.103490088 container start ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:44 compute-0 podman[100793]: 2025-11-25 23:33:44.77169219 +0000 UTC m=+0.106721317 container attach ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:44 compute-0 podman[100793]: 2025-11-25 23:33:44.68569134 +0000 UTC m=+0.020720487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:45 compute-0 ceph-mon[75654]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:33:45 compute-0 ceph-mon[75654]: pgmap v77: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:45 compute-0 zen_wright[100762]: {
Nov 25 23:33:45 compute-0 zen_wright[100762]:     "0": [
Nov 25 23:33:45 compute-0 zen_wright[100762]:         {
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "devices": [
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "/dev/loop3"
Nov 25 23:33:45 compute-0 zen_wright[100762]:             ],
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_name": "ceph_lv0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_size": "21470642176",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "name": "ceph_lv0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "tags": {
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.crush_device_class": "",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.encrypted": "0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.osd_id": "0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.type": "block",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.vdo": "0"
Nov 25 23:33:45 compute-0 zen_wright[100762]:             },
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "type": "block",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "vg_name": "ceph_vg0"
Nov 25 23:33:45 compute-0 zen_wright[100762]:         }
Nov 25 23:33:45 compute-0 zen_wright[100762]:     ],
Nov 25 23:33:45 compute-0 zen_wright[100762]:     "1": [
Nov 25 23:33:45 compute-0 zen_wright[100762]:         {
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "devices": [
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "/dev/loop4"
Nov 25 23:33:45 compute-0 zen_wright[100762]:             ],
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_name": "ceph_lv1",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_size": "21470642176",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "name": "ceph_lv1",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "tags": {
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.crush_device_class": "",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.encrypted": "0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.osd_id": "1",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.type": "block",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.vdo": "0"
Nov 25 23:33:45 compute-0 zen_wright[100762]:             },
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "type": "block",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "vg_name": "ceph_vg1"
Nov 25 23:33:45 compute-0 zen_wright[100762]:         }
Nov 25 23:33:45 compute-0 zen_wright[100762]:     ],
Nov 25 23:33:45 compute-0 zen_wright[100762]:     "2": [
Nov 25 23:33:45 compute-0 zen_wright[100762]:         {
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "devices": [
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "/dev/loop5"
Nov 25 23:33:45 compute-0 zen_wright[100762]:             ],
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_name": "ceph_lv2",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_size": "21470642176",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "name": "ceph_lv2",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "tags": {
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.crush_device_class": "",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.encrypted": "0",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.osd_id": "2",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.type": "block",
Nov 25 23:33:45 compute-0 zen_wright[100762]:                 "ceph.vdo": "0"
Nov 25 23:33:45 compute-0 zen_wright[100762]:             },
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "type": "block",
Nov 25 23:33:45 compute-0 zen_wright[100762]:             "vg_name": "ceph_vg2"
Nov 25 23:33:45 compute-0 zen_wright[100762]:         }
Nov 25 23:33:45 compute-0 zen_wright[100762]:     ]
Nov 25 23:33:45 compute-0 zen_wright[100762]: }
Nov 25 23:33:45 compute-0 systemd[1]: libpod-e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632.scope: Deactivated successfully.
Nov 25 23:33:45 compute-0 conmon[100762]: conmon e247544de738d30fe81a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632.scope/container/memory.events
Nov 25 23:33:45 compute-0 podman[100745]: 2025-11-25 23:33:45.233067387 +0000 UTC m=+0.897704233 container died e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-64d84f74cc53deaba3ebdc2fa2d2ef1361a2e80735fc495aaaac579299d11bb1-merged.mount: Deactivated successfully.
Nov 25 23:33:45 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:33:45 compute-0 hardcore_mirzakhani[100808]: 
Nov 25 23:33:45 compute-0 hardcore_mirzakhani[100808]: [{"container_id": "42d7403704ba", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.63%", "created": "2025-11-25T23:32:23.768831Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-25T23:32:23.818771Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982592Z", "memory_usage": 11607736, "ports": [], "service_name": "crash", "started": "2025-11-25T23:32:23.606904Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@crash.compute-0", "version": "18.2.7"}, {"container_id": "6b229c37a0bc", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "5.93%", "created": "2025-11-25T23:33:38.932742Z", "daemon_id": "cephfs.compute-0.bgauhq", "daemon_name": "mds.cephfs.compute-0.bgauhq", "daemon_type": "mds", "events": ["2025-11-25T23:33:38.979298Z daemon:mds.cephfs.compute-0.bgauhq [INFO] \"Deployed mds.cephfs.compute-0.bgauhq on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982819Z", "memory_usage": 13516144, "ports": [], "service_name": "mds.cephfs", "started": "2025-11-25T23:33:38.809974Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@mds.cephfs.compute-0.bgauhq", "version": "18.2.7"}, {"container_id": "cb17cd0be6b6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "30.06%", "created": "2025-11-25T23:31:12.640654Z", "daemon_id": "compute-0.gwqfsl", "daemon_name": "mgr.compute-0.gwqfsl", "daemon_type": "mgr", "events": ["2025-11-25T23:33:14.755740Z daemon:mgr.compute-0.gwqfsl [INFO] \"Reconfigured mgr.compute-0.gwqfsl on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982532Z", "memory_usage": 547880960, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-25T23:31:12.511267Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@mgr.compute-0.gwqfsl", "version": "18.2.7"}, {"container_id": "42789e176a5d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.28%", "created": "2025-11-25T23:31:07.103509Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-25T23:33:13.882670Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982456Z", "memory_request": 2147483648, "memory_usage": 37098618, "ports": [], "service_name": "mon", "started": "2025-11-25T23:31:09.966387Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@mon.compute-0", "version": "18.2.7"}, {"container_id": "1cdf379c2ca7", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.72%", "created": "2025-11-25T23:32:50.058505Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-25T23:32:50.125877Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982650Z", "memory_request": 4294967296, "memory_usage": 58709770, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T23:32:49.921521Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@osd.0", "version": "18.2.7"}, {"container_id": "210a65a79e01", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.00%", "created": "2025-11-25T23:32:54.628621Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-25T23:32:54.700615Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982707Z", "memory_request": 4294967296, "memory_usage": 57336135, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T23:32:54.499199Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@osd.1", "version": "18.2.7"}, {"container_id": "4adea0c725a0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.14%", "created": "2025-11-25T23:32:59.635748Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-25T23:32:59.682219Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-25T23:33:40.982763Z", "memory_request": 4294967296, "memory_usage": 55585013, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-25T23:32:59.539403Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-101922db-575f-58e2-980f-928050464f69@osd.2", "version": "18.2.7"}]
Nov 25 23:33:45 compute-0 podman[100745]: 2025-11-25 23:33:45.279645185 +0000 UTC m=+0.944282021 container remove e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wright, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 23:33:45 compute-0 systemd[1]: libpod-conmon-e247544de738d30fe81a8eea530cceab09629ad9c9750d50b5dfeb1adb4b4632.scope: Deactivated successfully.
Nov 25 23:33:45 compute-0 systemd[1]: libpod-ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f.scope: Deactivated successfully.
Nov 25 23:33:45 compute-0 podman[100793]: 2025-11-25 23:33:45.293123944 +0000 UTC m=+0.628153071 container died ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 23:33:45 compute-0 sudo[100628]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c35bc2c1aa6742e14dd57843e66050bda32fd28d815a554e80c29f945c8847c2-merged.mount: Deactivated successfully.
Nov 25 23:33:45 compute-0 podman[100793]: 2025-11-25 23:33:45.336105253 +0000 UTC m=+0.671134390 container remove ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f (image=quay.io/ceph/ceph:v18, name=hardcore_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:45 compute-0 systemd[1]: libpod-conmon-ef0210761cd33e8d1fc3e41aa4e3556568e21243194fd3184bf22332f3b2f62f.scope: Deactivated successfully.
Nov 25 23:33:45 compute-0 sudo[100790]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:45 compute-0 sudo[100856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:45 compute-0 sudo[100856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:45 compute-0 sudo[100856]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:45 compute-0 sudo[100888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:45 compute-0 sudo[100888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:45 compute-0 sudo[100888]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:45 compute-0 sudo[100913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:45 compute-0 sudo[100913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:45 compute-0 sudo[100913]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:45 compute-0 sudo[100938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:33:45 compute-0 sudo[100938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:45 compute-0 podman[101004]: 2025-11-25 23:33:45.817485509 +0000 UTC m=+0.046611460 container create 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:45 compute-0 systemd[1]: Started libpod-conmon-9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d.scope.
Nov 25 23:33:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:45 compute-0 podman[101004]: 2025-11-25 23:33:45.793288668 +0000 UTC m=+0.022414679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:45 compute-0 podman[101004]: 2025-11-25 23:33:45.900380353 +0000 UTC m=+0.129506354 container init 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:45 compute-0 podman[101004]: 2025-11-25 23:33:45.906495802 +0000 UTC m=+0.135621703 container start 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:45 compute-0 podman[101004]: 2025-11-25 23:33:45.909324701 +0000 UTC m=+0.138450652 container attach 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 23:33:45 compute-0 adoring_poincare[101020]: 167 167
Nov 25 23:33:45 compute-0 systemd[1]: libpod-9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d.scope: Deactivated successfully.
Nov 25 23:33:45 compute-0 podman[101004]: 2025-11-25 23:33:45.912768966 +0000 UTC m=+0.141894887 container died 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a17f1b74fdd230ab66a069872264c815ffa5a679d517e921658b882ef5352305-merged.mount: Deactivated successfully.
Nov 25 23:33:45 compute-0 podman[101004]: 2025-11-25 23:33:45.955094169 +0000 UTC m=+0.184220090 container remove 9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_poincare, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:45 compute-0 systemd[1]: libpod-conmon-9fd5d22d0a49366a3f1b9f77b2b6e3917611a06a34efb9983095aa2e5e6c595d.scope: Deactivated successfully.
Nov 25 23:33:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:46 compute-0 ceph-mon[75654]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 25 23:33:46 compute-0 podman[101044]: 2025-11-25 23:33:46.140455166 +0000 UTC m=+0.059316260 container create dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:33:46 compute-0 systemd[1]: Started libpod-conmon-dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad.scope.
Nov 25 23:33:46 compute-0 sudo[101081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksohujpqrdrhpbzuaxitfyyzfpwgfvya ; /usr/bin/python3'
Nov 25 23:33:46 compute-0 sudo[101081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:46 compute-0 podman[101044]: 2025-11-25 23:33:46.113540809 +0000 UTC m=+0.032401963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3dc709cbd50f3900818d3f9e76cf0f176dd87599e7c791ebea591b386e08f60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3dc709cbd50f3900818d3f9e76cf0f176dd87599e7c791ebea591b386e08f60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3dc709cbd50f3900818d3f9e76cf0f176dd87599e7c791ebea591b386e08f60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3dc709cbd50f3900818d3f9e76cf0f176dd87599e7c791ebea591b386e08f60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:46 compute-0 podman[101044]: 2025-11-25 23:33:46.225479692 +0000 UTC m=+0.144340816 container init dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:46 compute-0 podman[101044]: 2025-11-25 23:33:46.23277874 +0000 UTC m=+0.151639814 container start dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 23:33:46 compute-0 podman[101044]: 2025-11-25 23:33:46.235292432 +0000 UTC m=+0.154153546 container attach dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:46 compute-0 python3[101087]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:46 compute-0 podman[101091]: 2025-11-25 23:33:46.391600069 +0000 UTC m=+0.049007968 container create 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 23:33:46 compute-0 systemd[1]: Started libpod-conmon-10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d.scope.
Nov 25 23:33:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638e4b05250fb483a9812d6e81a156713fbdffb65b1a0e95ca92487a8e36b669/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/638e4b05250fb483a9812d6e81a156713fbdffb65b1a0e95ca92487a8e36b669/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:46 compute-0 podman[101091]: 2025-11-25 23:33:46.46084265 +0000 UTC m=+0.118250569 container init 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:46 compute-0 podman[101091]: 2025-11-25 23:33:46.369555931 +0000 UTC m=+0.026963880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:46 compute-0 podman[101091]: 2025-11-25 23:33:46.470545707 +0000 UTC m=+0.127953616 container start 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:33:46 compute-0 podman[101091]: 2025-11-25 23:33:46.474735149 +0000 UTC m=+0.132143068 container attach 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:33:47 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 25 23:33:47 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661170587' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 23:33:47 compute-0 youthful_yonath[101107]: 
Nov 25 23:33:47 compute-0 youthful_yonath[101107]: {"fsid":"101922db-575f-58e2-980f-928050464f69","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":156,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1764113585,"num_in_osds":3,"osd_in_since":1764113559,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":23,"data_bytes":461642,"bytes_used":83886080,"bytes_avail":64328040448,"bytes_total":64411926528,"write_bytes_sec":1194,"read_op_per_sec":0,"write_op_per_sec":3},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.bgauhq","status":"up:active","gid":14252}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-25T23:32:57.989925+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 25 23:33:47 compute-0 systemd[1]: libpod-10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d.scope: Deactivated successfully.
Nov 25 23:33:47 compute-0 ceph-mon[75654]: pgmap v78: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:47 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/661170587' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 23:33:47 compute-0 podman[101148]: 2025-11-25 23:33:47.101845544 +0000 UTC m=+0.030075066 container died 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 23:33:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-638e4b05250fb483a9812d6e81a156713fbdffb65b1a0e95ca92487a8e36b669-merged.mount: Deactivated successfully.
Nov 25 23:33:47 compute-0 podman[101148]: 2025-11-25 23:33:47.137636258 +0000 UTC m=+0.065865750 container remove 10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d (image=quay.io/ceph/ceph:v18, name=youthful_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:47 compute-0 systemd[1]: libpod-conmon-10a567da6cd964883044d103919e39aeaaa88889779c880133e50de63551204d.scope: Deactivated successfully.
Nov 25 23:33:47 compute-0 sudo[101081]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:47 compute-0 compassionate_tu[101085]: {
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "osd_id": 2,
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "type": "bluestore"
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:     },
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "osd_id": 1,
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "type": "bluestore"
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:     },
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "osd_id": 0,
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:         "type": "bluestore"
Nov 25 23:33:47 compute-0 compassionate_tu[101085]:     }
Nov 25 23:33:47 compute-0 compassionate_tu[101085]: }
Nov 25 23:33:47 compute-0 systemd[1]: libpod-dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad.scope: Deactivated successfully.
Nov 25 23:33:47 compute-0 systemd[1]: libpod-dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad.scope: Consumed 1.012s CPU time.
Nov 25 23:33:47 compute-0 podman[101044]: 2025-11-25 23:33:47.242387635 +0000 UTC m=+1.161248709 container died dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3dc709cbd50f3900818d3f9e76cf0f176dd87599e7c791ebea591b386e08f60-merged.mount: Deactivated successfully.
Nov 25 23:33:47 compute-0 podman[101044]: 2025-11-25 23:33:47.296658111 +0000 UTC m=+1.215519205 container remove dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 23:33:47 compute-0 systemd[1]: libpod-conmon-dabb82600a6099ad1d76fc58c38ecc9d89911847b1759dbbcdb6eff06f483dad.scope: Deactivated successfully.
Nov 25 23:33:47 compute-0 sudo[100938]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:47 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:47 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:47 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:47 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:47 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 606ebbee-60bc-4d6e-8e06-221f1623ff2e does not exist
Nov 25 23:33:47 compute-0 sudo[101187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:47 compute-0 sudo[101187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:47 compute-0 sudo[101187]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:47 compute-0 sudo[101212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:33:47 compute-0 sudo[101212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:47 compute-0 sudo[101212]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:47 compute-0 sudo[101237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:47 compute-0 sudo[101237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:47 compute-0 sudo[101237]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:47 compute-0 sudo[101262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:47 compute-0 sudo[101262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:47 compute-0 sudo[101262]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:47 compute-0 sudo[101287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:47 compute-0 sudo[101287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:47 compute-0 sudo[101287]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:47 compute-0 sudo[101312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 23:33:47 compute-0 sudo[101312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:47 compute-0 sudo[101377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhvpwloblnbboefmrjdecvnxlzlbgfwc ; /usr/bin/python3'
Nov 25 23:33:47 compute-0 sudo[101377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:48 compute-0 python3[101386]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:48 compute-0 podman[101419]: 2025-11-25 23:33:48.179193012 +0000 UTC m=+0.053722642 container create 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:48 compute-0 systemd[1]: Started libpod-conmon-0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a.scope.
Nov 25 23:33:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa560be9b574c473870ffd69e901d616d92c5592d46db1827d8d15435289feb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fa560be9b574c473870ffd69e901d616d92c5592d46db1827d8d15435289feb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:48 compute-0 podman[101419]: 2025-11-25 23:33:48.150661526 +0000 UTC m=+0.025191196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:48 compute-0 podman[101419]: 2025-11-25 23:33:48.282772242 +0000 UTC m=+0.157301902 container init 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:48 compute-0 podman[101443]: 2025-11-25 23:33:48.283529321 +0000 UTC m=+0.111887264 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 25 23:33:48 compute-0 podman[101419]: 2025-11-25 23:33:48.291147166 +0000 UTC m=+0.165676786 container start 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 23:33:48 compute-0 podman[101419]: 2025-11-25 23:33:48.309671859 +0000 UTC m=+0.184201489 container attach 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:33:48 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:48 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:48 compute-0 podman[101443]: 2025-11-25 23:33:48.401492721 +0000 UTC m=+0.229850604 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:48 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 25 23:33:48 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470785052' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:33:48 compute-0 focused_volhard[101460]: 
Nov 25 23:33:48 compute-0 focused_volhard[101460]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""}]
Nov 25 23:33:48 compute-0 systemd[1]: libpod-0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a.scope: Deactivated successfully.
Nov 25 23:33:48 compute-0 podman[101594]: 2025-11-25 23:33:48.877150037 +0000 UTC m=+0.020723707 container died 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fa560be9b574c473870ffd69e901d616d92c5592d46db1827d8d15435289feb-merged.mount: Deactivated successfully.
Nov 25 23:33:48 compute-0 podman[101594]: 2025-11-25 23:33:48.992014902 +0000 UTC m=+0.135588562 container remove 0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a (image=quay.io/ceph/ceph:v18, name=focused_volhard, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 23:33:48 compute-0 systemd[1]: libpod-conmon-0e7f414704c9ea62e2b46f6712b51ce69bf2afa5d34bf595bc0084978726622a.scope: Deactivated successfully.
Nov 25 23:33:49 compute-0 sudo[101312]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:49 compute-0 sudo[101377]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:49 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:49 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:49 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:33:49 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:33:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:33:49 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:49 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 9da7b15e-4eef-4c77-82c9-a6215d5e6beb does not exist
Nov 25 23:33:49 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev fe68a7cc-c80c-48f0-b63c-a49914e80135 does not exist
Nov 25 23:33:49 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 68cc684d-90b3-4f1e-b500-bcd7c1bb97ff does not exist
Nov 25 23:33:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:33:49 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:33:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:33:49 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:33:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:33:49 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:49 compute-0 sudo[101627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:49 compute-0 sudo[101627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:49 compute-0 sudo[101627]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:49 compute-0 sudo[101652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:49 compute-0 sudo[101652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:49 compute-0 sudo[101652]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:49 compute-0 sudo[101677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:49 compute-0 sudo[101677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:49 compute-0 sudo[101677]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:49 compute-0 sudo[101702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:33:49 compute-0 sudo[101702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:49 compute-0 ceph-mon[75654]: pgmap v79: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:49 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1470785052' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 25 23:33:49 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:49 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:49 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:49 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:33:49 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:49 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:33:49 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:33:49 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:33:49 compute-0 podman[101768]: 2025-11-25 23:33:49.549157848 +0000 UTC m=+0.047810299 container create 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:49 compute-0 systemd[1]: Started libpod-conmon-28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413.scope.
Nov 25 23:33:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:49 compute-0 podman[101768]: 2025-11-25 23:33:49.608443175 +0000 UTC m=+0.107095516 container init 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 23:33:49 compute-0 podman[101768]: 2025-11-25 23:33:49.522831145 +0000 UTC m=+0.021483546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:49 compute-0 podman[101768]: 2025-11-25 23:33:49.617796794 +0000 UTC m=+0.116449155 container start 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:49 compute-0 determined_shamir[101784]: 167 167
Nov 25 23:33:49 compute-0 systemd[1]: libpod-28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413.scope: Deactivated successfully.
Nov 25 23:33:49 compute-0 podman[101768]: 2025-11-25 23:33:49.62133314 +0000 UTC m=+0.119985461 container attach 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:49 compute-0 podman[101768]: 2025-11-25 23:33:49.621917015 +0000 UTC m=+0.120569386 container died 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 23:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c9877a4843b81d88f490d937592c54c322f124354c5e8ed6df562e8e8c1678-merged.mount: Deactivated successfully.
Nov 25 23:33:49 compute-0 podman[101768]: 2025-11-25 23:33:49.663684495 +0000 UTC m=+0.162336856 container remove 28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 23:33:49 compute-0 systemd[1]: libpod-conmon-28c88adca9a48106aa8af11c580b95a3154659f15f246b5b305cbe22c3c55413.scope: Deactivated successfully.
Nov 25 23:33:49 compute-0 podman[101809]: 2025-11-25 23:33:49.859426434 +0000 UTC m=+0.038582922 container create 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:33:49 compute-0 systemd[1]: Started libpod-conmon-934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950.scope.
Nov 25 23:33:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:49 compute-0 sudo[101850]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyxxgwdztfzllcvnrssqadysvnzohtvw ; /usr/bin/python3'
Nov 25 23:33:49 compute-0 podman[101809]: 2025-11-25 23:33:49.844318116 +0000 UTC m=+0.023474614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:49 compute-0 sudo[101850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:49 compute-0 podman[101809]: 2025-11-25 23:33:49.946801799 +0000 UTC m=+0.125958367 container init 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:33:49 compute-0 podman[101809]: 2025-11-25 23:33:49.962402479 +0000 UTC m=+0.141558997 container start 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:49 compute-0 podman[101809]: 2025-11-25 23:33:49.967896284 +0000 UTC m=+0.147052842 container attach 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 25 23:33:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:50 compute-0 python3[101853]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:50 compute-0 podman[101856]: 2025-11-25 23:33:50.163561792 +0000 UTC m=+0.051237653 container create 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:50 compute-0 systemd[1]: Started libpod-conmon-86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724.scope.
Nov 25 23:33:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:50 compute-0 podman[101856]: 2025-11-25 23:33:50.147019798 +0000 UTC m=+0.034695669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf3ae1d7f881332b7bc9939a78c0e177fa21248178a69c919c377bbbcc87398/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf3ae1d7f881332b7bc9939a78c0e177fa21248178a69c919c377bbbcc87398/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:50 compute-0 podman[101856]: 2025-11-25 23:33:50.2613408 +0000 UTC m=+0.149016741 container init 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 23:33:50 compute-0 podman[101856]: 2025-11-25 23:33:50.270427271 +0000 UTC m=+0.158103162 container start 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:50 compute-0 podman[101856]: 2025-11-25 23:33:50.274001309 +0000 UTC m=+0.161677200 container attach 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 25 23:33:50 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840893065' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 25 23:33:50 compute-0 gifted_wozniak[101871]: mimic
Nov 25 23:33:50 compute-0 systemd[1]: libpod-86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724.scope: Deactivated successfully.
Nov 25 23:33:50 compute-0 podman[101856]: 2025-11-25 23:33:50.851286016 +0000 UTC m=+0.738961957 container died 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccf3ae1d7f881332b7bc9939a78c0e177fa21248178a69c919c377bbbcc87398-merged.mount: Deactivated successfully.
Nov 25 23:33:50 compute-0 podman[101856]: 2025-11-25 23:33:50.905336296 +0000 UTC m=+0.793012147 container remove 86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724 (image=quay.io/ceph/ceph:v18, name=gifted_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:50 compute-0 systemd[1]: libpod-conmon-86e4f0259fc10d8a3c58ce6a4bee62bdd589a8860e5fed68472d70cba4375724.scope: Deactivated successfully.
Nov 25 23:33:50 compute-0 sudo[101850]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:51 compute-0 hungry_bouman[101846]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:33:51 compute-0 hungry_bouman[101846]: --> relative data size: 1.0
Nov 25 23:33:51 compute-0 hungry_bouman[101846]: --> All data devices are unavailable
Nov 25 23:33:51 compute-0 systemd[1]: libpod-934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950.scope: Deactivated successfully.
Nov 25 23:33:51 compute-0 systemd[1]: libpod-934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950.scope: Consumed 1.044s CPU time.
Nov 25 23:33:51 compute-0 podman[101809]: 2025-11-25 23:33:51.057751298 +0000 UTC m=+1.236907816 container died 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 25 23:33:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0884ec651dac78951bf0fdc39a7d87f5037c58885636b9bb5f967e10ca17a4b-merged.mount: Deactivated successfully.
Nov 25 23:33:51 compute-0 podman[101809]: 2025-11-25 23:33:51.124621531 +0000 UTC m=+1.303778029 container remove 934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bouman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:51 compute-0 systemd[1]: libpod-conmon-934182023e781879905d18bdda7a639f7b8e54d854ef277cf5701c2248a87950.scope: Deactivated successfully.
Nov 25 23:33:51 compute-0 sudo[101702]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:51 compute-0 sudo[101945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:51 compute-0 sudo[101945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:51 compute-0 sudo[101945]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:51 compute-0 sudo[101970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:51 compute-0 sudo[101970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:51 compute-0 sudo[101970]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:51 compute-0 sudo[101995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:51 compute-0 sudo[101995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:51 compute-0 sudo[101995]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:51 compute-0 ceph-mon[75654]: pgmap v80: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:51 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2840893065' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 25 23:33:51 compute-0 sudo[102020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:33:51 compute-0 sudo[102020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:51 compute-0 sudo[102104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqukzzsqkwmdpcjuzlljzbxbqjbwdlri ; /usr/bin/python3'
Nov 25 23:33:51 compute-0 sudo[102104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:33:51 compute-0 podman[102111]: 2025-11-25 23:33:51.866870897 +0000 UTC m=+0.061615876 container create 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:51 compute-0 systemd[1]: Started libpod-conmon-8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c.scope.
Nov 25 23:33:51 compute-0 python3[102110]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 101922db-575f-58e2-980f-928050464f69 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:33:51 compute-0 podman[102111]: 2025-11-25 23:33:51.839384747 +0000 UTC m=+0.034129766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:51 compute-0 podman[102111]: 2025-11-25 23:33:51.963835765 +0000 UTC m=+0.158580784 container init 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 25 23:33:51 compute-0 podman[102111]: 2025-11-25 23:33:51.970401775 +0000 UTC m=+0.165146714 container start 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:51 compute-0 distracted_wright[102128]: 167 167
Nov 25 23:33:51 compute-0 systemd[1]: libpod-8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c.scope: Deactivated successfully.
Nov 25 23:33:51 compute-0 podman[102111]: 2025-11-25 23:33:51.974518236 +0000 UTC m=+0.169263205 container attach 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 23:33:51 compute-0 conmon[102128]: conmon 8787b13a45966a4f5ba1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c.scope/container/memory.events
Nov 25 23:33:51 compute-0 podman[102111]: 2025-11-25 23:33:51.976290409 +0000 UTC m=+0.171035378 container died 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 25 23:33:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c2557da1c7e0c6744c6a6926da1353facbc7e62f7bde4dfc9c523f2f08445d1-merged.mount: Deactivated successfully.
Nov 25 23:33:52 compute-0 podman[102111]: 2025-11-25 23:33:52.030052833 +0000 UTC m=+0.224797792 container remove 8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_wright, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:52 compute-0 podman[102131]: 2025-11-25 23:33:52.035369973 +0000 UTC m=+0.084147167 container create 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 23:33:52 compute-0 systemd[1]: libpod-conmon-8787b13a45966a4f5ba15bc10c237c8e95267efd6b4a41c2ae98682925919d8c.scope: Deactivated successfully.
Nov 25 23:33:52 compute-0 systemd[1]: Started libpod-conmon-3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654.scope.
Nov 25 23:33:52 compute-0 podman[102131]: 2025-11-25 23:33:51.997808085 +0000 UTC m=+0.046585319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 25 23:33:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb5d94a7754066c2649dde9d59fd3f447942b1fc7c1a5779cfe3835d399ea9f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb5d94a7754066c2649dde9d59fd3f447942b1fc7c1a5779cfe3835d399ea9f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:52 compute-0 podman[102131]: 2025-11-25 23:33:52.12169037 +0000 UTC m=+0.170467524 container init 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 23:33:52 compute-0 podman[102131]: 2025-11-25 23:33:52.129039429 +0000 UTC m=+0.177816613 container start 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:52 compute-0 podman[102131]: 2025-11-25 23:33:52.132679158 +0000 UTC m=+0.181456312 container attach 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:33:52 compute-0 podman[102169]: 2025-11-25 23:33:52.247454491 +0000 UTC m=+0.062035315 container create 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 23:33:52 compute-0 systemd[1]: Started libpod-conmon-2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07.scope.
Nov 25 23:33:52 compute-0 podman[102169]: 2025-11-25 23:33:52.222208595 +0000 UTC m=+0.036789469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:52 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677bea79493d566ecf8d2b7fb295dcf53304fda40e618ccf4d1ab3055c6bc946/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677bea79493d566ecf8d2b7fb295dcf53304fda40e618ccf4d1ab3055c6bc946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677bea79493d566ecf8d2b7fb295dcf53304fda40e618ccf4d1ab3055c6bc946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677bea79493d566ecf8d2b7fb295dcf53304fda40e618ccf4d1ab3055c6bc946/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:52 compute-0 podman[102169]: 2025-11-25 23:33:52.358822271 +0000 UTC m=+0.173403145 container init 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 23:33:52 compute-0 podman[102169]: 2025-11-25 23:33:52.373725225 +0000 UTC m=+0.188306039 container start 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:33:52 compute-0 podman[102169]: 2025-11-25 23:33:52.377262712 +0000 UTC m=+0.191843586 container attach 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 23:33:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 25 23:33:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357608728' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 25 23:33:52 compute-0 crazy_einstein[102160]: 
Nov 25 23:33:52 compute-0 crazy_einstein[102160]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Nov 25 23:33:52 compute-0 systemd[1]: libpod-3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654.scope: Deactivated successfully.
Nov 25 23:33:52 compute-0 podman[102131]: 2025-11-25 23:33:52.742920341 +0000 UTC m=+0.791697565 container died 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:33:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb5d94a7754066c2649dde9d59fd3f447942b1fc7c1a5779cfe3835d399ea9f1-merged.mount: Deactivated successfully.
Nov 25 23:33:52 compute-0 podman[102131]: 2025-11-25 23:33:52.80106435 +0000 UTC m=+0.849841544 container remove 3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654 (image=quay.io/ceph/ceph:v18, name=crazy_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:52 compute-0 systemd[1]: libpod-conmon-3effb79cbe39d7fc73cb237bdd79d79e169c8ab81aedb8657bb7ed7c359aa654.scope: Deactivated successfully.
Nov 25 23:33:52 compute-0 sudo[102104]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:53 compute-0 hopeful_wright[102186]: {
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:     "0": [
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:         {
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "devices": [
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "/dev/loop3"
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             ],
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_name": "ceph_lv0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_size": "21470642176",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "name": "ceph_lv0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "tags": {
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.crush_device_class": "",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.encrypted": "0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.osd_id": "0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.type": "block",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.vdo": "0"
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             },
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "type": "block",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "vg_name": "ceph_vg0"
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:         }
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:     ],
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:     "1": [
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:         {
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "devices": [
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "/dev/loop4"
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             ],
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_name": "ceph_lv1",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_size": "21470642176",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "name": "ceph_lv1",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "tags": {
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.crush_device_class": "",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.encrypted": "0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.osd_id": "1",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.type": "block",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.vdo": "0"
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             },
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "type": "block",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "vg_name": "ceph_vg1"
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:         }
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:     ],
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:     "2": [
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:         {
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "devices": [
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "/dev/loop5"
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             ],
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_name": "ceph_lv2",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_size": "21470642176",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "name": "ceph_lv2",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "tags": {
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.cluster_name": "ceph",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.crush_device_class": "",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.encrypted": "0",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.osd_id": "2",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.type": "block",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:                 "ceph.vdo": "0"
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             },
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "type": "block",
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:             "vg_name": "ceph_vg2"
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:         }
Nov 25 23:33:53 compute-0 hopeful_wright[102186]:     ]
Nov 25 23:33:53 compute-0 hopeful_wright[102186]: }
Nov 25 23:33:53 compute-0 systemd[1]: libpod-2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07.scope: Deactivated successfully.
Nov 25 23:33:53 compute-0 podman[102228]: 2025-11-25 23:33:53.239082607 +0000 UTC m=+0.028127098 container died 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 23:33:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-677bea79493d566ecf8d2b7fb295dcf53304fda40e618ccf4d1ab3055c6bc946-merged.mount: Deactivated successfully.
Nov 25 23:33:53 compute-0 irqbalance[783]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 25 23:33:53 compute-0 irqbalance[783]: IRQ 27 affinity is now unmanaged
Nov 25 23:33:53 compute-0 podman[102228]: 2025-11-25 23:33:53.296965171 +0000 UTC m=+0.086009612 container remove 2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 25 23:33:53 compute-0 systemd[1]: libpod-conmon-2abc1017359260ee9e7a3e1c9975e2d041f3fdbe98f20bcb95dd1559e3b7fb07.scope: Deactivated successfully.
Nov 25 23:33:53 compute-0 sudo[102020]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:53 compute-0 sudo[102242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:53 compute-0 sudo[102242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:53 compute-0 sudo[102242]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:53 compute-0 ceph-mon[75654]: pgmap v81: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:53 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/357608728' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 25 23:33:53 compute-0 sudo[102267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:33:53 compute-0 sudo[102267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:53 compute-0 sudo[102267]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:53 compute-0 sudo[102292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:53 compute-0 sudo[102292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:53 compute-0 sudo[102292]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:53 compute-0 sudo[102317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:33:53 compute-0 sudo[102317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:53 compute-0 podman[102381]: 2025-11-25 23:33:53.993560142 +0000 UTC m=+0.036792089 container create 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 23:33:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:54 compute-0 systemd[1]: Started libpod-conmon-70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964.scope.
Nov 25 23:33:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:54 compute-0 podman[102381]: 2025-11-25 23:33:54.065488308 +0000 UTC m=+0.108720275 container init 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 23:33:54 compute-0 podman[102381]: 2025-11-25 23:33:53.974707472 +0000 UTC m=+0.017939469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:54 compute-0 podman[102381]: 2025-11-25 23:33:54.071895635 +0000 UTC m=+0.115127612 container start 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:33:54 compute-0 compassionate_haibt[102397]: 167 167
Nov 25 23:33:54 compute-0 podman[102381]: 2025-11-25 23:33:54.075399091 +0000 UTC m=+0.118631038 container attach 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:54 compute-0 systemd[1]: libpod-70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964.scope: Deactivated successfully.
Nov 25 23:33:54 compute-0 podman[102381]: 2025-11-25 23:33:54.076561418 +0000 UTC m=+0.119793396 container died 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:33:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-77230e5533964828db4a8b5d68fb85c6688195af4308f87e6ce5dc8117e9a18e-merged.mount: Deactivated successfully.
Nov 25 23:33:54 compute-0 podman[102381]: 2025-11-25 23:33:54.113937432 +0000 UTC m=+0.157169409 container remove 70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_haibt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:33:54 compute-0 systemd[1]: libpod-conmon-70ccd3a14bb9068263394db925d112bb3c881c7686e2458b9774e913b3b15964.scope: Deactivated successfully.
Nov 25 23:33:54 compute-0 podman[102419]: 2025-11-25 23:33:54.252583707 +0000 UTC m=+0.038907391 container create 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:33:54 compute-0 systemd[1]: Started libpod-conmon-99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3.scope.
Nov 25 23:33:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65119ed7c1f89c8c937150332c19b793dcbfa5002c0d8399c4a9a416aac63b77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65119ed7c1f89c8c937150332c19b793dcbfa5002c0d8399c4a9a416aac63b77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65119ed7c1f89c8c937150332c19b793dcbfa5002c0d8399c4a9a416aac63b77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65119ed7c1f89c8c937150332c19b793dcbfa5002c0d8399c4a9a416aac63b77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:33:54 compute-0 podman[102419]: 2025-11-25 23:33:54.317033811 +0000 UTC m=+0.103357505 container init 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 23:33:54 compute-0 podman[102419]: 2025-11-25 23:33:54.323656113 +0000 UTC m=+0.109979797 container start 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:33:54 compute-0 podman[102419]: 2025-11-25 23:33:54.326736508 +0000 UTC m=+0.113060212 container attach 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:33:54 compute-0 podman[102419]: 2025-11-25 23:33:54.235982662 +0000 UTC m=+0.022306376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:33:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:33:55 compute-0 musing_galileo[102435]: {
Nov 25 23:33:55 compute-0 musing_galileo[102435]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "osd_id": 2,
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "type": "bluestore"
Nov 25 23:33:55 compute-0 musing_galileo[102435]:     },
Nov 25 23:33:55 compute-0 musing_galileo[102435]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "osd_id": 1,
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "type": "bluestore"
Nov 25 23:33:55 compute-0 musing_galileo[102435]:     },
Nov 25 23:33:55 compute-0 musing_galileo[102435]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "osd_id": 0,
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:33:55 compute-0 musing_galileo[102435]:         "type": "bluestore"
Nov 25 23:33:55 compute-0 musing_galileo[102435]:     }
Nov 25 23:33:55 compute-0 musing_galileo[102435]: }
Nov 25 23:33:55 compute-0 systemd[1]: libpod-99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3.scope: Deactivated successfully.
Nov 25 23:33:55 compute-0 podman[102419]: 2025-11-25 23:33:55.283003591 +0000 UTC m=+1.069327275 container died 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:33:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-65119ed7c1f89c8c937150332c19b793dcbfa5002c0d8399c4a9a416aac63b77-merged.mount: Deactivated successfully.
Nov 25 23:33:55 compute-0 podman[102419]: 2025-11-25 23:33:55.344620515 +0000 UTC m=+1.130944189 container remove 99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galileo, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 23:33:55 compute-0 systemd[1]: libpod-conmon-99324a4a58761886a8a8512bb8ebe4d560089b5a03b6a3f893656b41b6a343a3.scope: Deactivated successfully.
Nov 25 23:33:55 compute-0 sudo[102317]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:33:55 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:33:55 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:55 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 87346fcb-af3e-4939-85f9-4a21fdfa9d2a does not exist
Nov 25 23:33:55 compute-0 sudo[102480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:33:55 compute-0 sudo[102480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:55 compute-0 sudo[102480]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:55 compute-0 ceph-mon[75654]: pgmap v82: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 25 23:33:55 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:55 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:33:55 compute-0 sudo[102505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:33:55 compute-0 sudo[102505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:33:55 compute-0 sudo[102505]: pam_unix(sudo:session): session closed for user root
Nov 25 23:33:55 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:33:55
Nov 25 23:33:55 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:33:55 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:33:55 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images']
Nov 25 23:33:55 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v83: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 25 23:33:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 23:33:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:33:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 25 23:33:56 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 23:33:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 23:33:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 25 23:33:56 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 25 23:33:56 compute-0 ceph-mgr[75954]: [progress INFO root] update: starting ev a294a9b9-bd08-41e2-b985-63fc655b363c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 25 23:33:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 23:33:56 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 23:33:56 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 23:33:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 25 23:33:57 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 23:33:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 25 23:33:57 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 25 23:33:57 compute-0 ceph-mgr[75954]: [progress INFO root] update: starting ev 1c51dc09-4661-4ea2-ada0-20b90163e486 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 23:33:57 compute-0 ceph-mon[75654]: pgmap v83: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 25 23:33:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 23:33:57 compute-0 ceph-mon[75654]: osdmap e33: 3 total, 3 up, 3 in
Nov 25 23:33:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 23:33:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 23:33:57 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 23:33:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 23:33:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 23:33:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 23:33:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 23:33:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 25 23:33:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 23:33:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 23:33:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 23:33:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 25 23:33:58 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 25 23:33:58 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=17/18 n=0 ec=15/15 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=11.871255875s) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active pruub 69.295204163s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:33:58 compute-0 ceph-mgr[75954]: [progress INFO root] update: starting ev 05bce111-1627-4951-bdf9-8e4cc223bd79 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 23:33:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 23:33:58 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=17/18 n=0 ec=15/15 lis/c=17/17 les/c/f=18/18/0 sis=35 pruub=11.871255875s) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown pruub 69.295204163s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:58 compute-0 ceph-mon[75654]: osdmap e34: 3 total, 3 up, 3 in
Nov 25 23:33:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 23:33:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 23:33:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 23:33:58 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 23:33:58 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=35 pruub=10.114383698s) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active pruub 73.546501160s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=35 pruub=10.114383698s) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown pruub 73.546501160s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 25 23:33:59 compute-0 ceph-mon[75654]: pgmap v86: 7 pgs: 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:33:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 23:33:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 23:33:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 23:33:59 compute-0 ceph-mon[75654]: osdmap e35: 3 total, 3 up, 3 in
Nov 25 23:33:59 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 23:33:59 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 23:33:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 25 23:33:59 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 25 23:33:59 compute-0 ceph-mgr[75954]: [progress INFO root] update: starting ev b1f63197-f9e1-4ab4-b487-3863e818d0ec (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 23:33:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 25 23:33:59 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1e( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1f( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.7( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.6( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.4( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.2( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.b( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.12( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.18( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.17( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.19( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=16/17 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=17/18 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=35/36 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.2( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.4( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.10( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=15/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=17/17 les/c/f=18/18/0 sis=35) [2] r=0 lpr=35 pi=[17,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.14( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.13( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.19( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.1a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:33:59 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 36 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=16/16 les/c/f=17/17/0 sis=35) [1] r=0 lpr=35 pi=[16,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v89: 69 pgs: 62 unknown, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 23:34:00 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 23:34:00 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 25 23:34:00 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 25 23:34:00 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 23:34:00 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 23:34:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 25 23:34:00 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 25 23:34:00 compute-0 ceph-mgr[75954]: [progress INFO root] update: starting ev 3ace7068-2acb-4dfe-803b-f2931d775e25 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 25 23:34:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 25 23:34:00 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 23:34:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 23:34:00 compute-0 ceph-mon[75654]: osdmap e36: 3 total, 3 up, 3 in
Nov 25 23:34:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 25 23:34:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:00 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=10.073327065s) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active pruub 79.905021667s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37 pruub=10.073327065s) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown pruub 79.905021667s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 25 23:34:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 23:34:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 25 23:34:01 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] update: starting ev 7699bc09-a2a8-4e48-b17a-5c13d7a72e5a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] complete: finished ev a294a9b9-bd08-41e2-b985-63fc655b363c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event a294a9b9-bd08-41e2-b985-63fc655b363c (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] complete: finished ev 1c51dc09-4661-4ea2-ada0-20b90163e486 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event 1c51dc09-4661-4ea2-ada0-20b90163e486 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] complete: finished ev 05bce111-1627-4951-bdf9-8e4cc223bd79 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event 05bce111-1627-4951-bdf9-8e4cc223bd79 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] complete: finished ev b1f63197-f9e1-4ab4-b487-3863e818d0ec (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event b1f63197-f9e1-4ab4-b487-3863e818d0ec (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] complete: finished ev 3ace7068-2acb-4dfe-803b-f2931d775e25 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event 3ace7068-2acb-4dfe-803b-f2931d775e25 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] complete: finished ev 7699bc09-a2a8-4e48-b17a-5c13d7a72e5a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 25 23:34:01 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event 7699bc09-a2a8-4e48-b17a-5c13d7a72e5a (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 25 23:34:01 compute-0 ceph-mon[75654]: pgmap v89: 69 pgs: 62 unknown, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 25 23:34:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 23:34:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 23:34:01 compute-0 ceph-mon[75654]: osdmap e37: 3 total, 3 up, 3 in
Nov 25 23:34:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 23:34:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 23:34:01 compute-0 ceph-mon[75654]: osdmap e38: 3 total, 3 up, 3 in
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=18/19 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.0( empty local-lis/les=37/38 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:01 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=18/18 les/c/f=19/19/0 sis=37) [0] r=0 lpr=37 pi=[18,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v92: 131 pgs: 62 unknown, 69 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 25 23:34:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 25 23:34:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 25 23:34:02 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 25 23:34:02 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 25 23:34:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 25 23:34:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 25 23:34:02 compute-0 ceph-mon[75654]: 2.1 scrub starts
Nov 25 23:34:02 compute-0 ceph-mon[75654]: 2.1 scrub ok
Nov 25 23:34:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 23:34:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 25 23:34:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 25 23:34:02 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 37 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=10.846710205s) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active pruub 72.344696045s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37 pruub=10.846710205s) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown pruub 72.344696045s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.6( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.8( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.a( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.b( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.d( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.e( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.10( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.17( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:02 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=20/21 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=14.256612778s) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active pruub 81.685195923s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39 pruub=14.256612778s) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown pruub 81.685195923s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 25 23:34:03 compute-0 ceph-mon[75654]: pgmap v92: 131 pgs: 62 unknown, 69 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:03 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 23:34:03 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 25 23:34:03 compute-0 ceph-mon[75654]: osdmap e39: 3 total, 3 up, 3 in
Nov 25 23:34:03 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 25 23:34:03 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 39 pg[6.0( v 32'39 (0'0,32'39] local-lis/les=22/23 n=22 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=11.870360374s) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 32'38 mlcod 32'38 active pruub 83.977050781s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.0( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=11.870360374s) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 32'38 mlcod 0'0 unknown pruub 83.977050781s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.12( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.10( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.17( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.7( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.d( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.19( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=24/25 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 40 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=22/23 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.12( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.10( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.17( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=39/40 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.7( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.d( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 40 pg[7.19( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=24/24 les/c/f=25/25/0 sis=39) [1] r=0 lpr=39 pi=[24,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=37/40 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:03 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=20/20 les/c/f=21/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 77 unknown, 100 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:04 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Nov 25 23:34:04 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Nov 25 23:34:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 25 23:34:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 25 23:34:04 compute-0 ceph-mon[75654]: osdmap e40: 3 total, 3 up, 3 in
Nov 25 23:34:04 compute-0 ceph-mon[75654]: 4.1 deep-scrub starts
Nov 25 23:34:04 compute-0 ceph-mon[75654]: 4.1 deep-scrub ok
Nov 25 23:34:04 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.0( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 32'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 41 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:04 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Nov 25 23:34:04 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Nov 25 23:34:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:05 compute-0 ceph-mon[75654]: pgmap v95: 177 pgs: 77 unknown, 100 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:05 compute-0 ceph-mon[75654]: osdmap e41: 3 total, 3 up, 3 in
Nov 25 23:34:05 compute-0 ceph-mon[75654]: 3.1 deep-scrub starts
Nov 25 23:34:05 compute-0 ceph-mon[75654]: 3.1 deep-scrub ok
Nov 25 23:34:05 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 25 23:34:05 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 25 23:34:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 77 unknown, 100 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:06 compute-0 ceph-mgr[75954]: [progress INFO root] Writing back 10 completed events
Nov 25 23:34:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 23:34:06 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:34:06 compute-0 ceph-mon[75654]: 3.2 scrub starts
Nov 25 23:34:06 compute-0 ceph-mon[75654]: 3.2 scrub ok
Nov 25 23:34:06 compute-0 ceph-mon[75654]: pgmap v97: 177 pgs: 77 unknown, 100 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:06 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:34:06 compute-0 sshd-session[102531]: Invalid user administrator from 124.18.141.70 port 41830
Nov 25 23:34:07 compute-0 sshd-session[102531]: Received disconnect from 124.18.141.70 port 41830:11: Bye Bye [preauth]
Nov 25 23:34:07 compute-0 sshd-session[102531]: Disconnected from invalid user administrator 124.18.141.70 port 41830 [preauth]
Nov 25 23:34:07 compute-0 sshd-session[102533]: Accepted publickey for zuul from 192.168.122.30 port 37356 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:34:07 compute-0 systemd-logind[789]: New session 34 of user zuul.
Nov 25 23:34:07 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 25 23:34:07 compute-0 sshd-session[102533]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:34:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 23:34:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 25 23:34:08 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487118721s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.075416565s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487162590s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.075515747s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487066269s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.075515747s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487041473s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.075500488s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486937523s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.075416565s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.541646957s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.130157471s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486957550s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.075500488s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.541588783s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130157471s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487282753s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.075973511s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.536790848s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.125488281s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487258911s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.075973511s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.536728859s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.125488281s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487183571s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076034546s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487133026s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076034546s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540464401s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129493713s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487164497s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076225281s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486917496s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.075981140s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.487134933s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076225281s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540402412s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129493713s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540250778s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129425049s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486831665s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.075981140s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486952782s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076194763s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540205002s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129425049s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486927032s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076194763s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486803055s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076232910s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486779213s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076232910s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540124893s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129638672s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486709595s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076293945s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486627579s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076232910s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486623764s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076293945s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.539930344s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129531860s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486513138s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076232910s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.539773941s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129531860s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540101051s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.130050659s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540071487s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130050659s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486405373s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076408386s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486298561s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076354980s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486351013s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076408386s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.539968491s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.130088806s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486248016s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076354980s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.539942741s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130088806s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486095428s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076354980s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486060143s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076354980s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486003876s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076400757s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486073494s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076492310s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.540076256s) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129638672s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.486012459s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076492310s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485896111s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076400757s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485866547s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076431274s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485957146s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076538086s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485918999s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076538086s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485754967s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076446533s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485826492s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 86.076614380s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485813141s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076431274s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485706329s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076446533s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42 pruub=9.485775948s) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.076614380s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456751823s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465843201s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520897865s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530090332s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520854950s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530136108s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456146240s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465744019s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455876350s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465759277s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520113945s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530021667s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455184937s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465225220s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519988060s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530075073s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519800186s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530029297s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454953194s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465225220s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519281387s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530158997s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454054832s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465164185s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518911362s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530174255s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518821716s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530166626s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453702927s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465126038s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454210281s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465759277s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518668175s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530258179s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453030586s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464744568s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518463135s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530212402s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452710152s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464645386s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452497482s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464584351s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518067360s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530242920s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518050194s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530319214s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453290939s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465820312s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451832771s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464576721s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452794075s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465126038s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517469406s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530319214s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451684952s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464576721s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517217636s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530265808s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451214790s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464385986s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452562332s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465751648s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451215744s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464645386s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516825676s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530311584s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450861931s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464378357s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516772270s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530380249s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450617790s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464271545s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450549126s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464279175s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450786591s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464523315s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516482353s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530273438s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516325951s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530418396s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516283035s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530479431s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515798569s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530448914s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.513943672s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805473328s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.513921738s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805473328s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448771477s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740425110s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448755264s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740425110s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448654175s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740394592s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448639870s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740394592s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448849678s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464324951s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448562622s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740417480s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448523521s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740417480s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.513177872s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805099487s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.513118744s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805099487s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447863579s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740653992s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.512705803s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805541992s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447410583s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740371704s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447385788s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740371704s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.512588501s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805541992s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447697639s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740653992s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447261810s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740386963s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447239876s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740386963s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447119713s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740371704s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.512052536s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805320740s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511995316s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805313110s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511977196s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805313110s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.446464539s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739807129s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.512003899s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805320740s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.447058678s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740371704s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.446429253s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739807129s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.446962357s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.740394592s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.446947098s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.740394592s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511787415s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805274963s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511771202s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805305481s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511754990s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805305481s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511736870s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805274963s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511671066s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805328369s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511650085s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805366516s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511633873s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805366516s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511618614s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805328369s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511734009s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805572510s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511713028s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805572510s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445724487s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739570618s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445761681s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739692688s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445349693s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739334106s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445615768s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739570618s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445330620s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739334106s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511328697s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805397034s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445724487s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739692688s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445547104s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739692688s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.511263847s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805397034s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.445530891s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739692688s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444708824s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739219666s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510831833s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805419922s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444672585s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739219666s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510773659s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805419922s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444632530s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739364624s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510651588s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805412292s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510663033s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805442810s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510628700s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805412292s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444575310s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739364624s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510622025s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805442810s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444879532s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739822388s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444858551s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739822388s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510386467s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805358887s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510434151s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805465698s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444198608s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739227295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444249153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739318848s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510313988s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805358887s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510400772s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805465698s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444231987s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739318848s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510384560s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805511475s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444146156s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739227295s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510334015s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805511475s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444341660s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739570618s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510371208s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805610657s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.444326401s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739570618s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510343552s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805610657s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.443850517s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.739219666s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.443834305s) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.739219666s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510123253s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805511475s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510214806s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 83.805618286s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510066032s) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805511475s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42 pruub=11.510153770s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.805618286s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436471939s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.733673096s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:08 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 25 23:34:08 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 25 23:34:08 compute-0 python3.9[102686]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:34:08 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 25 23:34:08 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 25 23:34:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 25 23:34:09 compute-0 ceph-mon[75654]: pgmap v98: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 23:34:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 23:34:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 23:34:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 23:34:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 23:34:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 23:34:09 compute-0 ceph-mon[75654]: osdmap e42: 3 total, 3 up, 3 in
Nov 25 23:34:09 compute-0 ceph-mon[75654]: 3.4 scrub starts
Nov 25 23:34:09 compute-0 ceph-mon[75654]: 3.4 scrub ok
Nov 25 23:34:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 25 23:34:09 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:09 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 25 23:34:09 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 25 23:34:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 25 23:34:10 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 23:34:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 25 23:34:10 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 23:34:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 25 23:34:10 compute-0 ceph-mon[75654]: 2.c scrub starts
Nov 25 23:34:10 compute-0 ceph-mon[75654]: 2.c scrub ok
Nov 25 23:34:10 compute-0 ceph-mon[75654]: osdmap e43: 3 total, 3 up, 3 in
Nov 25 23:34:10 compute-0 ceph-mon[75654]: 4.3 scrub starts
Nov 25 23:34:10 compute-0 ceph-mon[75654]: 4.3 scrub ok
Nov 25 23:34:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 23:34:10 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510518074s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.130073547s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:10 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510458946s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130073547s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:10 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509833336s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129592896s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:10 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509752274s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129592896s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:10 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505555153s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.125503540s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:10 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505515099s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.125503540s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:10 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509609222s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129646301s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:10 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509572029s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129646301s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:10 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 25 23:34:10 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:10 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:10 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:10 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:10 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Nov 25 23:34:10 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Nov 25 23:34:10 compute-0 sudo[102902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imkkpatxvjwpsqdcfeljemtvhjyeibrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113650.21983-32-89908756410421/AnsiballZ_command.py'
Nov 25 23:34:10 compute-0 sudo[102902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:34:10 compute-0 python3.9[102904]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:34:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 25 23:34:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 25 23:34:11 compute-0 ceph-mgr[75954]: [progress INFO root] Completed event 39061ea4-a72e-4426-a139-8eb608550a46 (Global Recovery Event) in 10 seconds
Nov 25 23:34:11 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 25 23:34:11 compute-0 ceph-mon[75654]: pgmap v101: 177 pgs: 177 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:11 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 23:34:11 compute-0 ceph-mon[75654]: osdmap e44: 3 total, 3 up, 3 in
Nov 25 23:34:11 compute-0 ceph-mon[75654]: 4.6 deep-scrub starts
Nov 25 23:34:11 compute-0 ceph-mon[75654]: 4.6 deep-scrub ok
Nov 25 23:34:11 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:11 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:11 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:11 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v104: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 199 B/s, 2 keys/s, 2 objects/s recovering
Nov 25 23:34:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 25 23:34:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 23:34:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 25 23:34:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 23:34:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 25 23:34:12 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 25 23:34:12 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976484299s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309509277s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:12 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:12 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976278305s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309593201s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:12 compute-0 ceph-mon[75654]: osdmap e45: 3 total, 3 up, 3 in
Nov 25 23:34:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 23:34:12 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:12 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976170540s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309707642s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:12 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:12 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976114273s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309745789s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:12 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:12 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:12 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:12 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:12 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 25 23:34:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 25 23:34:13 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 25 23:34:13 compute-0 ceph-mon[75654]: pgmap v104: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 199 B/s, 2 keys/s, 2 objects/s recovering
Nov 25 23:34:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 23:34:13 compute-0 ceph-mon[75654]: osdmap e46: 3 total, 3 up, 3 in
Nov 25 23:34:13 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:13 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:13 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:13 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:13 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 25 23:34:13 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 25 23:34:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 4 peering, 1 active+recovering, 172 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 199 B/s, 2 keys/s, 2 objects/s recovering
Nov 25 23:34:14 compute-0 ceph-mon[75654]: osdmap e47: 3 total, 3 up, 3 in
Nov 25 23:34:14 compute-0 ceph-mon[75654]: 2.e scrub starts
Nov 25 23:34:14 compute-0 ceph-mon[75654]: 2.e scrub ok
Nov 25 23:34:15 compute-0 ceph-mon[75654]: pgmap v107: 177 pgs: 4 peering, 1 active+recovering, 172 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 199 B/s, 2 keys/s, 2 objects/s recovering
Nov 25 23:34:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:15 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 25 23:34:15 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 25 23:34:15 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 25 23:34:15 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 25 23:34:15 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 25 23:34:15 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 25 23:34:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 4 peering, 1 active+recovering, 172 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 135 B/s, 1 keys/s, 2 objects/s recovering
Nov 25 23:34:16 compute-0 ceph-mgr[75954]: [progress INFO root] Writing back 11 completed events
Nov 25 23:34:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 25 23:34:16 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:34:16 compute-0 ceph-mon[75654]: 4.b scrub starts
Nov 25 23:34:16 compute-0 ceph-mon[75654]: 4.b scrub ok
Nov 25 23:34:16 compute-0 ceph-mon[75654]: 3.b scrub starts
Nov 25 23:34:16 compute-0 ceph-mon[75654]: 3.b scrub ok
Nov 25 23:34:16 compute-0 ceph-mon[75654]: 2.10 scrub starts
Nov 25 23:34:16 compute-0 ceph-mon[75654]: 2.10 scrub ok
Nov 25 23:34:16 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:34:16 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 25 23:34:16 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 25 23:34:17 compute-0 ceph-mon[75654]: pgmap v108: 177 pgs: 4 peering, 1 active+recovering, 172 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 135 B/s, 1 keys/s, 2 objects/s recovering
Nov 25 23:34:17 compute-0 ceph-mon[75654]: 3.d scrub starts
Nov 25 23:34:17 compute-0 ceph-mon[75654]: 3.d scrub ok
Nov 25 23:34:17 compute-0 sudo[102902]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v109: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 2 keys/s, 2 objects/s recovering
Nov 25 23:34:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 25 23:34:18 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 23:34:18 compute-0 sshd-session[102536]: Connection closed by 192.168.122.30 port 37356
Nov 25 23:34:18 compute-0 sshd-session[102533]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:34:18 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 25 23:34:18 compute-0 systemd[1]: session-34.scope: Consumed 8.725s CPU time.
Nov 25 23:34:18 compute-0 systemd-logind[789]: Session 34 logged out. Waiting for processes to exit.
Nov 25 23:34:18 compute-0 systemd-logind[789]: Removed session 34.
Nov 25 23:34:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 25 23:34:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 23:34:18 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 23:34:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 25 23:34:18 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 25 23:34:18 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 25 23:34:18 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 25 23:34:18 compute-0 sshd-session[102934]: Invalid user 1 from 171.244.140.22 port 44986
Nov 25 23:34:18 compute-0 sshd-session[102934]: Received disconnect from 171.244.140.22 port 44986:11: Bye Bye [preauth]
Nov 25 23:34:18 compute-0 sshd-session[102934]: Disconnected from invalid user 1 171.244.140.22 port 44986 [preauth]
Nov 25 23:34:19 compute-0 ceph-mon[75654]: pgmap v109: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 2 keys/s, 2 objects/s recovering
Nov 25 23:34:19 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 23:34:19 compute-0 ceph-mon[75654]: osdmap e48: 3 total, 3 up, 3 in
Nov 25 23:34:19 compute-0 ceph-mon[75654]: 4.c scrub starts
Nov 25 23:34:19 compute-0 ceph-mon[75654]: 4.c scrub ok
Nov 25 23:34:19 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281056404s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 97.130294800s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:19 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281001091s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.130294800s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:19 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279815674s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 97.129615784s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:19 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:19 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279746056s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.129615784s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:19 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 1 keys/s, 1 objects/s recovering
Nov 25 23:34:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 25 23:34:20 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 23:34:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:20 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Nov 25 23:34:20 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.10 deep-scrub starts
Nov 25 23:34:20 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.10 deep-scrub ok
Nov 25 23:34:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 25 23:34:20 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Nov 25 23:34:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 23:34:20 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 23:34:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 25 23:34:20 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 25 23:34:20 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182587624s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active pruub 97.309936523s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:20 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:20 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182564735s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active pruub 97.310157776s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:20 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:20 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:20 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:20 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:20 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:21 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts
Nov 25 23:34:21 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok
Nov 25 23:34:21 compute-0 ceph-mon[75654]: pgmap v111: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 1 keys/s, 1 objects/s recovering
Nov 25 23:34:21 compute-0 ceph-mon[75654]: 4.15 deep-scrub starts
Nov 25 23:34:21 compute-0 ceph-mon[75654]: 3.10 deep-scrub starts
Nov 25 23:34:21 compute-0 ceph-mon[75654]: 3.10 deep-scrub ok
Nov 25 23:34:21 compute-0 ceph-mon[75654]: 4.15 deep-scrub ok
Nov 25 23:34:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 23:34:21 compute-0 ceph-mon[75654]: osdmap e49: 3 total, 3 up, 3 in
Nov 25 23:34:21 compute-0 ceph-mon[75654]: 3.13 deep-scrub starts
Nov 25 23:34:21 compute-0 ceph-mon[75654]: 3.13 deep-scrub ok
Nov 25 23:34:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 25 23:34:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 25 23:34:21 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 25 23:34:21 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:21 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v114: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 218 B/s, 1 keys/s, 1 objects/s recovering
Nov 25 23:34:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 25 23:34:22 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 23:34:22 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 25 23:34:22 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 25 23:34:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 25 23:34:22 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 23:34:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 25 23:34:22 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 25 23:34:22 compute-0 ceph-mon[75654]: osdmap e50: 3 total, 3 up, 3 in
Nov 25 23:34:22 compute-0 ceph-mon[75654]: pgmap v114: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 218 B/s, 1 keys/s, 1 objects/s recovering
Nov 25 23:34:22 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 23:34:22 compute-0 ceph-mon[75654]: 3.14 scrub starts
Nov 25 23:34:22 compute-0 ceph-mon[75654]: 3.14 scrub ok
Nov 25 23:34:23 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 25 23:34:23 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 25 23:34:23 compute-0 sshd-session[102963]: Invalid user user1 from 202.83.162.167 port 40094
Nov 25 23:34:23 compute-0 sshd-session[102963]: Received disconnect from 202.83.162.167 port 40094:11: Bye Bye [preauth]
Nov 25 23:34:23 compute-0 sshd-session[102963]: Disconnected from invalid user user1 202.83.162.167 port 40094 [preauth]
Nov 25 23:34:23 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 25 23:34:23 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 25 23:34:23 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 23:34:23 compute-0 ceph-mon[75654]: osdmap e51: 3 total, 3 up, 3 in
Nov 25 23:34:23 compute-0 ceph-mon[75654]: 3.19 scrub starts
Nov 25 23:34:23 compute-0 ceph-mon[75654]: 3.19 scrub ok
Nov 25 23:34:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 144 B/s, 1 objects/s recovering
Nov 25 23:34:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 25 23:34:24 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 23:34:24 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Nov 25 23:34:24 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Nov 25 23:34:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 25 23:34:24 compute-0 ceph-mon[75654]: 2.12 scrub starts
Nov 25 23:34:24 compute-0 ceph-mon[75654]: 2.12 scrub ok
Nov 25 23:34:24 compute-0 ceph-mon[75654]: pgmap v116: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 144 B/s, 1 objects/s recovering
Nov 25 23:34:24 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 23:34:24 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 23:34:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 25 23:34:24 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 25 23:34:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:25 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 25 23:34:25 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 25 23:34:25 compute-0 ceph-mon[75654]: 2.14 deep-scrub starts
Nov 25 23:34:25 compute-0 ceph-mon[75654]: 2.14 deep-scrub ok
Nov 25 23:34:25 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 23:34:25 compute-0 ceph-mon[75654]: osdmap e52: 3 total, 3 up, 3 in
Nov 25 23:34:25 compute-0 ceph-mon[75654]: 4.16 scrub starts
Nov 25 23:34:25 compute-0 ceph-mon[75654]: 4.16 scrub ok
Nov 25 23:34:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 163 B/s, 1 objects/s recovering
Nov 25 23:34:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 25 23:34:26 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 23:34:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:34:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:34:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:34:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:34:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:34:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:34:26 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 25 23:34:26 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 25 23:34:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 25 23:34:26 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 23:34:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 25 23:34:26 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 25 23:34:27 compute-0 ceph-mon[75654]: pgmap v118: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 163 B/s, 1 objects/s recovering
Nov 25 23:34:27 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 23:34:27 compute-0 ceph-mon[75654]: 4.17 scrub starts
Nov 25 23:34:27 compute-0 ceph-mon[75654]: 4.17 scrub ok
Nov 25 23:34:27 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 25 23:34:27 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 25 23:34:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 23:34:28 compute-0 ceph-mon[75654]: osdmap e53: 3 total, 3 up, 3 in
Nov 25 23:34:28 compute-0 ceph-mon[75654]: 2.1a scrub starts
Nov 25 23:34:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 175 B/s, 0 objects/s recovering
Nov 25 23:34:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 25 23:34:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 23:34:28 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Nov 25 23:34:28 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Nov 25 23:34:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 25 23:34:29 compute-0 ceph-mon[75654]: 2.1a scrub ok
Nov 25 23:34:29 compute-0 ceph-mon[75654]: pgmap v120: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 175 B/s, 0 objects/s recovering
Nov 25 23:34:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 23:34:29 compute-0 ceph-mon[75654]: 4.19 deep-scrub starts
Nov 25 23:34:29 compute-0 ceph-mon[75654]: 4.19 deep-scrub ok
Nov 25 23:34:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 23:34:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 25 23:34:29 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 25 23:34:29 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.088303566s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 105.309837341s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:29 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:29 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:29 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 53 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581254959s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 113.130058289s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:29 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581200600s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.130058289s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:29 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:29 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 25 23:34:29 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 25 23:34:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 0 objects/s recovering
Nov 25 23:34:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 25 23:34:30 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 23:34:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 25 23:34:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 23:34:30 compute-0 ceph-mon[75654]: osdmap e54: 3 total, 3 up, 3 in
Nov 25 23:34:30 compute-0 ceph-mon[75654]: 3.1a deep-scrub starts
Nov 25 23:34:30 compute-0 ceph-mon[75654]: 3.1a deep-scrub ok
Nov 25 23:34:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 23:34:30 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 23:34:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 25 23:34:30 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 25 23:34:30 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:30 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:30 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 25 23:34:30 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 25 23:34:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 25 23:34:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590789795s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 107.316719055s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:30 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 25 23:34:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 25 23:34:31 compute-0 ceph-mon[75654]: pgmap v122: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 0 objects/s recovering
Nov 25 23:34:31 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 23:34:31 compute-0 ceph-mon[75654]: osdmap e55: 3 total, 3 up, 3 in
Nov 25 23:34:31 compute-0 ceph-mon[75654]: 4.1d scrub starts
Nov 25 23:34:31 compute-0 ceph-mon[75654]: 4.1d scrub ok
Nov 25 23:34:31 compute-0 ceph-mon[75654]: 3.1c scrub starts
Nov 25 23:34:31 compute-0 ceph-mon[75654]: 3.1c scrub ok
Nov 25 23:34:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 25 23:34:31 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 25 23:34:31 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=55/56 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:31 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 25 23:34:31 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 25 23:34:31 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 25 23:34:31 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 25 23:34:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 25 23:34:32 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 23:34:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 25 23:34:32 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 23:34:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 25 23:34:32 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 25 23:34:32 compute-0 ceph-mon[75654]: osdmap e56: 3 total, 3 up, 3 in
Nov 25 23:34:32 compute-0 ceph-mon[75654]: 4.1e scrub starts
Nov 25 23:34:32 compute-0 ceph-mon[75654]: 4.1e scrub ok
Nov 25 23:34:32 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 23:34:33 compute-0 ceph-mon[75654]: 2.1e scrub starts
Nov 25 23:34:33 compute-0 ceph-mon[75654]: 2.1e scrub ok
Nov 25 23:34:33 compute-0 ceph-mon[75654]: pgmap v125: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:33 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 23:34:33 compute-0 ceph-mon[75654]: osdmap e57: 3 total, 3 up, 3 in
Nov 25 23:34:33 compute-0 sshd-session[102965]: Accepted publickey for zuul from 192.168.122.30 port 54026 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:34:33 compute-0 systemd-logind[789]: New session 35 of user zuul.
Nov 25 23:34:33 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 25 23:34:33 compute-0 sshd-session[102965]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:34:33 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247831345s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active pruub 113.683471680s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:33 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247759819s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 113.683471680s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:33 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:34 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 25 23:34:34 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 23:34:34 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 25 23:34:34 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 23:34:34 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 23:34:34 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 25 23:34:34 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 25 23:34:34 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:34 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 25 23:34:34 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 25 23:34:34 compute-0 python3.9[103120]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 23:34:34 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 25 23:34:34 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 25 23:34:35 compute-0 ceph-mon[75654]: pgmap v127: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:35 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 23:34:35 compute-0 ceph-mon[75654]: osdmap e58: 3 total, 3 up, 3 in
Nov 25 23:34:35 compute-0 ceph-mon[75654]: 4.1f scrub starts
Nov 25 23:34:35 compute-0 ceph-mon[75654]: 4.1f scrub ok
Nov 25 23:34:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:35 compute-0 sshd-session[103021]: Invalid user test3 from 68.183.82.237 port 34722
Nov 25 23:34:35 compute-0 sshd-session[103021]: Received disconnect from 68.183.82.237 port 34722:11: Bye Bye [preauth]
Nov 25 23:34:35 compute-0 sshd-session[103021]: Disconnected from invalid user test3 68.183.82.237 port 34722 [preauth]
Nov 25 23:34:35 compute-0 python3.9[103294]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:34:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 25 23:34:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 23:34:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 25 23:34:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 23:34:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 25 23:34:36 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 25 23:34:36 compute-0 ceph-mon[75654]: 5.6 scrub starts
Nov 25 23:34:36 compute-0 ceph-mon[75654]: 5.6 scrub ok
Nov 25 23:34:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 23:34:36 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230822563s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active pruub 114.456108093s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:36 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230740547s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 114.456108093s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:36 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:36 compute-0 sudo[103448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpcbjltdfyumddrfoviaxueqalhervpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113676.368749-45-271809754136897/AnsiballZ_command.py'
Nov 25 23:34:36 compute-0 sudo[103448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:34:36 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.8 deep-scrub starts
Nov 25 23:34:36 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.8 deep-scrub ok
Nov 25 23:34:37 compute-0 python3.9[103450]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:34:37 compute-0 sudo[103448]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:37 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 25 23:34:37 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 25 23:34:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 25 23:34:37 compute-0 ceph-mon[75654]: pgmap v129: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:34:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 23:34:37 compute-0 ceph-mon[75654]: osdmap e59: 3 total, 3 up, 3 in
Nov 25 23:34:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 25 23:34:37 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 25 23:34:37 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 23:34:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 25 23:34:38 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 23:34:38 compute-0 sudo[103601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvbidujwwgreuhsjygwuemukugttyvkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113677.48416-57-54599678066633/AnsiballZ_stat.py'
Nov 25 23:34:38 compute-0 sudo[103601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:34:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 25 23:34:38 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 23:34:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 25 23:34:38 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 25 23:34:38 compute-0 ceph-mon[75654]: 5.8 deep-scrub starts
Nov 25 23:34:38 compute-0 ceph-mon[75654]: 5.8 deep-scrub ok
Nov 25 23:34:38 compute-0 ceph-mon[75654]: 5.1e scrub starts
Nov 25 23:34:38 compute-0 ceph-mon[75654]: 5.1e scrub ok
Nov 25 23:34:38 compute-0 ceph-mon[75654]: osdmap e60: 3 total, 3 up, 3 in
Nov 25 23:34:38 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 23:34:38 compute-0 python3.9[103603]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:34:38 compute-0 sudo[103601]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:38 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 25 23:34:38 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 25 23:34:39 compute-0 sudo[103755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oodnwxijckdujttetnpfjvslpowubffd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113678.5825946-68-75445125879758/AnsiballZ_file.py'
Nov 25 23:34:39 compute-0 sudo[103755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:34:39 compute-0 ceph-mon[75654]: pgmap v132: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 23:34:39 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 23:34:39 compute-0 ceph-mon[75654]: osdmap e61: 3 total, 3 up, 3 in
Nov 25 23:34:39 compute-0 ceph-mon[75654]: 7.7 scrub starts
Nov 25 23:34:39 compute-0 ceph-mon[75654]: 7.7 scrub ok
Nov 25 23:34:39 compute-0 python3.9[103757]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:34:39 compute-0 sudo[103755]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 23:34:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 25 23:34:40 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 23:34:40 compute-0 sudo[103907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dslavfnkuwbvdweyzeboctxowhsniclo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113679.6200523-77-170672216281391/AnsiballZ_file.py'
Nov 25 23:34:40 compute-0 sudo[103907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:34:40 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 25 23:34:40 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 25 23:34:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 25 23:34:40 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 23:34:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 25 23:34:40 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 25 23:34:40 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 23:34:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:40 compute-0 python3.9[103909]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:34:40 compute-0 sudo[103907]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:40 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850687027s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active pruub 121.687095642s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:34:40 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850586891s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 121.687095642s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:34:40 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:34:40 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 25 23:34:40 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 25 23:34:41 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 25 23:34:41 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 25 23:34:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 25 23:34:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 25 23:34:41 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 25 23:34:41 compute-0 ceph-mon[75654]: pgmap v134: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 25 23:34:41 compute-0 ceph-mon[75654]: 2.13 scrub starts
Nov 25 23:34:41 compute-0 ceph-mon[75654]: 2.13 scrub ok
Nov 25 23:34:41 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 23:34:41 compute-0 ceph-mon[75654]: osdmap e62: 3 total, 3 up, 3 in
Nov 25 23:34:41 compute-0 ceph-mon[75654]: 5.a scrub starts
Nov 25 23:34:41 compute-0 ceph-mon[75654]: 5.a scrub ok
Nov 25 23:34:41 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:34:41 compute-0 python3.9[104059]: ansible-ansible.builtin.service_facts Invoked
Nov 25 23:34:41 compute-0 network[104076]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 23:34:41 compute-0 network[104077]: 'network-scripts' will be removed from distribution in near future.
Nov 25 23:34:41 compute-0 network[104078]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 23:34:41 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 25 23:34:41 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 25 23:34:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 18 B/s, 0 objects/s recovering
Nov 25 23:34:42 compute-0 ceph-mon[75654]: 2.16 scrub starts
Nov 25 23:34:42 compute-0 ceph-mon[75654]: 2.16 scrub ok
Nov 25 23:34:42 compute-0 ceph-mon[75654]: osdmap e63: 3 total, 3 up, 3 in
Nov 25 23:34:42 compute-0 ceph-mon[75654]: 5.b scrub starts
Nov 25 23:34:42 compute-0 ceph-mon[75654]: 5.b scrub ok
Nov 25 23:34:42 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 25 23:34:42 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 25 23:34:43 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 25 23:34:43 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 25 23:34:43 compute-0 ceph-mon[75654]: pgmap v137: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 18 B/s, 0 objects/s recovering
Nov 25 23:34:43 compute-0 ceph-mon[75654]: 7.b scrub starts
Nov 25 23:34:43 compute-0 ceph-mon[75654]: 7.b scrub ok
Nov 25 23:34:43 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 25 23:34:43 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 25 23:34:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 15 B/s, 0 objects/s recovering
Nov 25 23:34:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 25 23:34:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 25 23:34:44 compute-0 ceph-mon[75654]: 5.14 scrub starts
Nov 25 23:34:44 compute-0 ceph-mon[75654]: 5.14 scrub ok
Nov 25 23:34:44 compute-0 ceph-mon[75654]: 7.d scrub starts
Nov 25 23:34:44 compute-0 ceph-mon[75654]: 7.d scrub ok
Nov 25 23:34:44 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 25 23:34:44 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 25 23:34:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:45 compute-0 ceph-mon[75654]: pgmap v138: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 15 B/s, 0 objects/s recovering
Nov 25 23:34:45 compute-0 ceph-mon[75654]: 3.17 scrub starts
Nov 25 23:34:45 compute-0 ceph-mon[75654]: 3.17 scrub ok
Nov 25 23:34:45 compute-0 ceph-mon[75654]: 7.10 scrub starts
Nov 25 23:34:45 compute-0 ceph-mon[75654]: 7.10 scrub ok
Nov 25 23:34:45 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 25 23:34:45 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 25 23:34:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 11 B/s, 0 objects/s recovering
Nov 25 23:34:46 compute-0 python3.9[104338]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:34:46 compute-0 ceph-mon[75654]: 7.12 deep-scrub starts
Nov 25 23:34:46 compute-0 ceph-mon[75654]: 7.12 deep-scrub ok
Nov 25 23:34:46 compute-0 python3.9[104488]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:34:47 compute-0 ceph-mon[75654]: pgmap v139: 177 pgs: 1 active+recovering, 176 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1/24 objects misplaced (4.167%); 11 B/s, 0 objects/s recovering
Nov 25 23:34:47 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 25 23:34:47 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 25 23:34:47 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 25 23:34:47 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 25 23:34:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 83 B/s, 0 objects/s recovering
Nov 25 23:34:48 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 25 23:34:48 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 25 23:34:48 compute-0 ceph-mon[75654]: 7.14 scrub starts
Nov 25 23:34:48 compute-0 ceph-mon[75654]: 7.14 scrub ok
Nov 25 23:34:48 compute-0 python3.9[104642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:34:49 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 25 23:34:49 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 25 23:34:49 compute-0 ceph-mon[75654]: 5.d scrub starts
Nov 25 23:34:49 compute-0 ceph-mon[75654]: 5.d scrub ok
Nov 25 23:34:49 compute-0 ceph-mon[75654]: pgmap v140: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 83 B/s, 0 objects/s recovering
Nov 25 23:34:49 compute-0 ceph-mon[75654]: 7.13 scrub starts
Nov 25 23:34:49 compute-0 ceph-mon[75654]: 7.13 scrub ok
Nov 25 23:34:49 compute-0 sudo[104798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foppwkxyccwrhnfoerrtsqnbigimoart ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113688.9277773-125-128401749588665/AnsiballZ_setup.py'
Nov 25 23:34:49 compute-0 sudo[104798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:34:49 compute-0 python3.9[104800]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:34:49 compute-0 sudo[104798]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:50 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 25 23:34:50 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 25 23:34:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 0 objects/s recovering
Nov 25 23:34:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:50 compute-0 ceph-mon[75654]: 3.f scrub starts
Nov 25 23:34:50 compute-0 ceph-mon[75654]: 3.f scrub ok
Nov 25 23:34:50 compute-0 sudo[104882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbuwavzvfmchfrbhxttmeqbuqkehkpnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113688.9277773-125-128401749588665/AnsiballZ_dnf.py'
Nov 25 23:34:50 compute-0 sudo[104882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:34:50 compute-0 python3.9[104884]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:34:51 compute-0 ceph-mon[75654]: 3.c scrub starts
Nov 25 23:34:51 compute-0 ceph-mon[75654]: 3.c scrub ok
Nov 25 23:34:51 compute-0 ceph-mon[75654]: pgmap v141: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 0 objects/s recovering
Nov 25 23:34:51 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 25 23:34:51 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 25 23:34:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Nov 25 23:34:52 compute-0 ceph-mon[75654]: 7.16 scrub starts
Nov 25 23:34:52 compute-0 ceph-mon[75654]: 7.16 scrub ok
Nov 25 23:34:52 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 25 23:34:52 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 25 23:34:53 compute-0 ceph-mon[75654]: pgmap v142: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 61 B/s, 0 objects/s recovering
Nov 25 23:34:53 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 25 23:34:53 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 25 23:34:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 0 objects/s recovering
Nov 25 23:34:54 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 25 23:34:54 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 25 23:34:54 compute-0 ceph-mon[75654]: 5.e scrub starts
Nov 25 23:34:54 compute-0 ceph-mon[75654]: 5.e scrub ok
Nov 25 23:34:54 compute-0 ceph-mon[75654]: 7.17 scrub starts
Nov 25 23:34:54 compute-0 ceph-mon[75654]: 7.17 scrub ok
Nov 25 23:34:54 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 25 23:34:54 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 25 23:34:54 compute-0 sshd-session[104953]: banner exchange: Connection from 91.238.181.92 port 65451: invalid format
Nov 25 23:34:55 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 25 23:34:55 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 25 23:34:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:34:55 compute-0 ceph-mon[75654]: pgmap v143: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 0 objects/s recovering
Nov 25 23:34:55 compute-0 ceph-mon[75654]: 2.11 scrub starts
Nov 25 23:34:55 compute-0 ceph-mon[75654]: 2.11 scrub ok
Nov 25 23:34:55 compute-0 ceph-mon[75654]: 7.19 scrub starts
Nov 25 23:34:55 compute-0 ceph-mon[75654]: 7.19 scrub ok
Nov 25 23:34:55 compute-0 sudo[104954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:34:55 compute-0 sudo[104954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:55 compute-0 sudo[104954]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:55 compute-0 sudo[104979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:34:55 compute-0 sudo[104979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:55 compute-0 sudo[104979]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:55 compute-0 sudo[105004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:34:55 compute-0 sudo[105004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:55 compute-0 sudo[105004]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:55 compute-0 sudo[105029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 23:34:55 compute-0 sudo[105029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:34:55
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'backups']
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 0 objects/s recovering
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:34:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:34:56 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 25 23:34:56 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 25 23:34:56 compute-0 podman[105128]: 2025-11-25 23:34:56.273922012 +0000 UTC m=+0.078201837 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:34:56 compute-0 ceph-mon[75654]: 3.12 scrub starts
Nov 25 23:34:56 compute-0 ceph-mon[75654]: 3.12 scrub ok
Nov 25 23:34:56 compute-0 podman[105128]: 2025-11-25 23:34:56.429861869 +0000 UTC m=+0.234141634 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Nov 25 23:34:56 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 25 23:34:56 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 25 23:34:57 compute-0 sudo[105029]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:34:57 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:34:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:34:57 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:34:57 compute-0 sudo[105271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:34:57 compute-0 sudo[105271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:57 compute-0 sudo[105271]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:57 compute-0 sudo[105296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:34:57 compute-0 sudo[105296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:57 compute-0 sudo[105296]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:57 compute-0 sudo[105321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:34:57 compute-0 sudo[105321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:57 compute-0 sudo[105321]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:57 compute-0 sudo[105346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:34:57 compute-0 sudo[105346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:57 compute-0 ceph-mon[75654]: pgmap v144: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 0 objects/s recovering
Nov 25 23:34:57 compute-0 ceph-mon[75654]: 7.f scrub starts
Nov 25 23:34:57 compute-0 ceph-mon[75654]: 7.f scrub ok
Nov 25 23:34:57 compute-0 ceph-mon[75654]: 7.1d scrub starts
Nov 25 23:34:57 compute-0 ceph-mon[75654]: 7.1d scrub ok
Nov 25 23:34:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:34:57 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:34:57 compute-0 sudo[105346]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:34:57 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:34:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:34:57 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:34:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:34:57 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:34:57 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 0b1cd114-9a7c-4f30-98a6-0c20fd854f0a does not exist
Nov 25 23:34:57 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 54c11424-98eb-44a0-8721-7fe25d7c96c0 does not exist
Nov 25 23:34:57 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 6f792355-2392-407c-9ef3-3e440e441a57 does not exist
Nov 25 23:34:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:34:57 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:34:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:34:57 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:34:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:34:57 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:34:57 compute-0 sudo[105401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:34:57 compute-0 sudo[105401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:57 compute-0 sudo[105401]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:58 compute-0 sudo[105426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:34:58 compute-0 sudo[105426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:58 compute-0 sudo[105426]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 0 objects/s recovering
Nov 25 23:34:58 compute-0 sudo[105451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:34:58 compute-0 sudo[105451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:58 compute-0 sudo[105451]: pam_unix(sudo:session): session closed for user root
Nov 25 23:34:58 compute-0 sudo[105476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:34:58 compute-0 sudo[105476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:34:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:34:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:34:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:34:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:34:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:34:58 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:34:58 compute-0 podman[105540]: 2025-11-25 23:34:58.448322417 +0000 UTC m=+0.046757085 container create 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:34:58 compute-0 systemd[77281]: Starting Mark boot as successful...
Nov 25 23:34:58 compute-0 systemd[77281]: Finished Mark boot as successful.
Nov 25 23:34:58 compute-0 systemd[1]: Started libpod-conmon-8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd.scope.
Nov 25 23:34:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:34:58 compute-0 podman[105540]: 2025-11-25 23:34:58.513878002 +0000 UTC m=+0.112312700 container init 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:34:58 compute-0 podman[105540]: 2025-11-25 23:34:58.423630105 +0000 UTC m=+0.022064833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:34:58 compute-0 podman[105540]: 2025-11-25 23:34:58.520256814 +0000 UTC m=+0.118691512 container start 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 23:34:58 compute-0 inspiring_heyrovsky[105558]: 167 167
Nov 25 23:34:58 compute-0 systemd[1]: libpod-8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd.scope: Deactivated successfully.
Nov 25 23:34:58 compute-0 conmon[105558]: conmon 8418778d9cb54eb8d11a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd.scope/container/memory.events
Nov 25 23:34:58 compute-0 podman[105540]: 2025-11-25 23:34:58.526347826 +0000 UTC m=+0.124782504 container attach 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 23:34:58 compute-0 podman[105540]: 2025-11-25 23:34:58.526623195 +0000 UTC m=+0.125057843 container died 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 23:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c75270cd039dd80ed94ee712bd78803d7773915d22a9db77e2dbffe771a45565-merged.mount: Deactivated successfully.
Nov 25 23:34:58 compute-0 podman[105540]: 2025-11-25 23:34:58.559824393 +0000 UTC m=+0.158259051 container remove 8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:34:58 compute-0 systemd[1]: libpod-conmon-8418778d9cb54eb8d11a66ce079d2cee8a3265065465f197d081727f812d14fd.scope: Deactivated successfully.
Nov 25 23:34:58 compute-0 podman[105581]: 2025-11-25 23:34:58.75751314 +0000 UTC m=+0.060121982 container create ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:34:58 compute-0 systemd[1]: Started libpod-conmon-ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f.scope.
Nov 25 23:34:58 compute-0 podman[105581]: 2025-11-25 23:34:58.732275053 +0000 UTC m=+0.034883865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:34:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:34:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:34:58 compute-0 podman[105581]: 2025-11-25 23:34:58.877842603 +0000 UTC m=+0.180451475 container init ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:34:58 compute-0 podman[105581]: 2025-11-25 23:34:58.886278589 +0000 UTC m=+0.188887391 container start ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:34:58 compute-0 podman[105581]: 2025-11-25 23:34:58.889688651 +0000 UTC m=+0.192297493 container attach ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:34:59 compute-0 ceph-mon[75654]: pgmap v145: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 B/s, 0 objects/s recovering
Nov 25 23:34:59 compute-0 quirky_matsumoto[105597]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:34:59 compute-0 quirky_matsumoto[105597]: --> relative data size: 1.0
Nov 25 23:34:59 compute-0 quirky_matsumoto[105597]: --> All data devices are unavailable
Nov 25 23:34:59 compute-0 systemd[1]: libpod-ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f.scope: Deactivated successfully.
Nov 25 23:34:59 compute-0 podman[105581]: 2025-11-25 23:34:59.895427126 +0000 UTC m=+1.198035958 container died ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 25 23:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6235981576a35f4ebc7f903a5f56ec4f00393e6380e7a2e2d7bc0ddf1950092a-merged.mount: Deactivated successfully.
Nov 25 23:34:59 compute-0 podman[105581]: 2025-11-25 23:34:59.958629119 +0000 UTC m=+1.261237931 container remove ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:34:59 compute-0 systemd[1]: libpod-conmon-ec114bad703d557b877583a8fc42eca09e03857a1c5b39af07fc572e134e937f.scope: Deactivated successfully.
Nov 25 23:34:59 compute-0 sudo[105476]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:00 compute-0 sudo[105637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:35:00 compute-0 sudo[105637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:35:00 compute-0 sudo[105637]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:00 compute-0 sudo[105662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:35:00 compute-0 sudo[105662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:35:00 compute-0 sudo[105662]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:00 compute-0 sudo[105687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:35:00 compute-0 sudo[105687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:35:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:00 compute-0 sudo[105687]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:00 compute-0 sudo[105712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:35:00 compute-0 sudo[105712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:35:00 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 25 23:35:00 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 25 23:35:00 compute-0 podman[105775]: 2025-11-25 23:35:00.612162679 +0000 UTC m=+0.048643265 container create aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:35:00 compute-0 systemd[1]: Started libpod-conmon-aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc.scope.
Nov 25 23:35:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:35:00 compute-0 podman[105775]: 2025-11-25 23:35:00.591724061 +0000 UTC m=+0.028204747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:35:00 compute-0 podman[105775]: 2025-11-25 23:35:00.697567497 +0000 UTC m=+0.134048163 container init aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 23:35:00 compute-0 podman[105775]: 2025-11-25 23:35:00.704930404 +0000 UTC m=+0.141410990 container start aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:35:00 compute-0 podman[105775]: 2025-11-25 23:35:00.708543501 +0000 UTC m=+0.145024087 container attach aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 23:35:00 compute-0 nifty_cartwright[105792]: 167 167
Nov 25 23:35:00 compute-0 systemd[1]: libpod-aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc.scope: Deactivated successfully.
Nov 25 23:35:00 compute-0 podman[105775]: 2025-11-25 23:35:00.71188753 +0000 UTC m=+0.148368126 container died aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:35:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1ce747fbd83d0beb939caa1c03324c8a642f5080d14399fa8aae36d9c2e5f79-merged.mount: Deactivated successfully.
Nov 25 23:35:00 compute-0 podman[105775]: 2025-11-25 23:35:00.750150206 +0000 UTC m=+0.186630792 container remove aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 23:35:00 compute-0 systemd[1]: libpod-conmon-aa32b278331998736ea87f23ebf5d8697fcf04b082abdebe531b676c563744dc.scope: Deactivated successfully.
Nov 25 23:35:00 compute-0 podman[105817]: 2025-11-25 23:35:00.926376686 +0000 UTC m=+0.064353035 container create 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:35:00 compute-0 systemd[1]: Started libpod-conmon-2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637.scope.
Nov 25 23:35:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14db278946e792be15d916a8574410d74fedda95cb5b959c59433116faa1fcaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14db278946e792be15d916a8574410d74fedda95cb5b959c59433116faa1fcaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14db278946e792be15d916a8574410d74fedda95cb5b959c59433116faa1fcaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:35:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14db278946e792be15d916a8574410d74fedda95cb5b959c59433116faa1fcaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:35:00 compute-0 podman[105817]: 2025-11-25 23:35:00.902395304 +0000 UTC m=+0.040371643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:35:01 compute-0 podman[105817]: 2025-11-25 23:35:01.01270471 +0000 UTC m=+0.150681079 container init 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:35:01 compute-0 podman[105817]: 2025-11-25 23:35:01.021273519 +0000 UTC m=+0.159249848 container start 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:35:01 compute-0 podman[105817]: 2025-11-25 23:35:01.024777533 +0000 UTC m=+0.162753892 container attach 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:35:01 compute-0 ceph-mon[75654]: pgmap v146: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:01 compute-0 ceph-mon[75654]: 7.1e scrub starts
Nov 25 23:35:01 compute-0 ceph-mon[75654]: 7.1e scrub ok
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:35:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:35:01 compute-0 priceless_cori[105837]: {
Nov 25 23:35:01 compute-0 priceless_cori[105837]:     "0": [
Nov 25 23:35:01 compute-0 priceless_cori[105837]:         {
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "devices": [
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "/dev/loop3"
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             ],
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_name": "ceph_lv0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_size": "21470642176",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "name": "ceph_lv0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "tags": {
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.cluster_name": "ceph",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.crush_device_class": "",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.encrypted": "0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.osd_id": "0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.type": "block",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.vdo": "0"
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             },
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "type": "block",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "vg_name": "ceph_vg0"
Nov 25 23:35:01 compute-0 priceless_cori[105837]:         }
Nov 25 23:35:01 compute-0 priceless_cori[105837]:     ],
Nov 25 23:35:01 compute-0 priceless_cori[105837]:     "1": [
Nov 25 23:35:01 compute-0 priceless_cori[105837]:         {
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "devices": [
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "/dev/loop4"
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             ],
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_name": "ceph_lv1",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_size": "21470642176",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "name": "ceph_lv1",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "tags": {
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.cluster_name": "ceph",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.crush_device_class": "",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.encrypted": "0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.osd_id": "1",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.type": "block",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.vdo": "0"
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             },
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "type": "block",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "vg_name": "ceph_vg1"
Nov 25 23:35:01 compute-0 priceless_cori[105837]:         }
Nov 25 23:35:01 compute-0 priceless_cori[105837]:     ],
Nov 25 23:35:01 compute-0 priceless_cori[105837]:     "2": [
Nov 25 23:35:01 compute-0 priceless_cori[105837]:         {
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "devices": [
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "/dev/loop5"
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             ],
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_name": "ceph_lv2",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_size": "21470642176",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "name": "ceph_lv2",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "tags": {
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.cluster_name": "ceph",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.crush_device_class": "",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.encrypted": "0",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.osd_id": "2",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.type": "block",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:                 "ceph.vdo": "0"
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             },
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "type": "block",
Nov 25 23:35:01 compute-0 priceless_cori[105837]:             "vg_name": "ceph_vg2"
Nov 25 23:35:01 compute-0 priceless_cori[105837]:         }
Nov 25 23:35:01 compute-0 priceless_cori[105837]:     ]
Nov 25 23:35:01 compute-0 priceless_cori[105837]: }
Nov 25 23:35:01 compute-0 systemd[1]: libpod-2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637.scope: Deactivated successfully.
Nov 25 23:35:01 compute-0 conmon[105837]: conmon 2580ff58d3d606d6d72c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637.scope/container/memory.events
Nov 25 23:35:01 compute-0 podman[105817]: 2025-11-25 23:35:01.785224517 +0000 UTC m=+0.923200856 container died 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 23:35:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-14db278946e792be15d916a8574410d74fedda95cb5b959c59433116faa1fcaa-merged.mount: Deactivated successfully.
Nov 25 23:35:01 compute-0 podman[105817]: 2025-11-25 23:35:01.844271368 +0000 UTC m=+0.982247677 container remove 2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_cori, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 25 23:35:01 compute-0 systemd[1]: libpod-conmon-2580ff58d3d606d6d72c70927f0e0273e1d8df78d96d5d1fd799004ab72fd637.scope: Deactivated successfully.
Nov 25 23:35:01 compute-0 sudo[105712]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:01 compute-0 sudo[105861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:35:01 compute-0 sudo[105861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:35:01 compute-0 sudo[105861]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:02 compute-0 sudo[105886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:35:02 compute-0 sudo[105886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:35:02 compute-0 sudo[105886]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:02 compute-0 sudo[105911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:35:02 compute-0 sudo[105911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:35:02 compute-0 sudo[105911]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:02 compute-0 sudo[105936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:35:02 compute-0 sudo[105936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:35:02 compute-0 podman[106001]: 2025-11-25 23:35:02.612438449 +0000 UTC m=+0.056631139 container create b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:35:02 compute-0 systemd[1]: Started libpod-conmon-b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf.scope.
Nov 25 23:35:02 compute-0 podman[106001]: 2025-11-25 23:35:02.585783705 +0000 UTC m=+0.029976445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:35:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:35:02 compute-0 podman[106001]: 2025-11-25 23:35:02.72262306 +0000 UTC m=+0.166815810 container init b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:35:02 compute-0 podman[106001]: 2025-11-25 23:35:02.733091921 +0000 UTC m=+0.177284601 container start b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 23:35:02 compute-0 podman[106001]: 2025-11-25 23:35:02.737316075 +0000 UTC m=+0.181508765 container attach b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 23:35:02 compute-0 keen_gagarin[106018]: 167 167
Nov 25 23:35:02 compute-0 systemd[1]: libpod-b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf.scope: Deactivated successfully.
Nov 25 23:35:02 compute-0 podman[106001]: 2025-11-25 23:35:02.743771907 +0000 UTC m=+0.187964597 container died b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:35:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c28586afa73800240d7230627d9c214f9e6f9c5cad4f2d9439ff75d93eee594-merged.mount: Deactivated successfully.
Nov 25 23:35:02 compute-0 podman[106001]: 2025-11-25 23:35:02.796291584 +0000 UTC m=+0.240484284 container remove b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 23:35:02 compute-0 systemd[1]: libpod-conmon-b370704ec8691d05e7c1ed8c688cd19e9d4e3064ecdcd9a4f253ddea7b4f23cf.scope: Deactivated successfully.
Nov 25 23:35:03 compute-0 podman[106043]: 2025-11-25 23:35:03.042648155 +0000 UTC m=+0.075916265 container create 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 23:35:03 compute-0 systemd[1]: Started libpod-conmon-55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2.scope.
Nov 25 23:35:03 compute-0 podman[106043]: 2025-11-25 23:35:03.012904718 +0000 UTC m=+0.046172888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:35:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a4b7c9147076c25452bfe891b65754241574246277f94410d848c1a8f429ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a4b7c9147076c25452bfe891b65754241574246277f94410d848c1a8f429ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a4b7c9147076c25452bfe891b65754241574246277f94410d848c1a8f429ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a4b7c9147076c25452bfe891b65754241574246277f94410d848c1a8f429ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:35:03 compute-0 podman[106043]: 2025-11-25 23:35:03.187855725 +0000 UTC m=+0.221123905 container init 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 23:35:03 compute-0 podman[106043]: 2025-11-25 23:35:03.201427828 +0000 UTC m=+0.234695948 container start 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:35:03 compute-0 podman[106043]: 2025-11-25 23:35:03.205566889 +0000 UTC m=+0.238834999 container attach 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:35:03 compute-0 ceph-mon[75654]: pgmap v147: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:04 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 25 23:35:04 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 25 23:35:04 compute-0 naughty_thompson[106059]: {
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "osd_id": 2,
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "type": "bluestore"
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:     },
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "osd_id": 1,
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "type": "bluestore"
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:     },
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "osd_id": 0,
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:         "type": "bluestore"
Nov 25 23:35:04 compute-0 naughty_thompson[106059]:     }
Nov 25 23:35:04 compute-0 naughty_thompson[106059]: }
Nov 25 23:35:04 compute-0 systemd[1]: libpod-55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2.scope: Deactivated successfully.
Nov 25 23:35:04 compute-0 podman[106043]: 2025-11-25 23:35:04.255087508 +0000 UTC m=+1.288355598 container died 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 23:35:04 compute-0 systemd[1]: libpod-55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2.scope: Consumed 1.064s CPU time.
Nov 25 23:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-10a4b7c9147076c25452bfe891b65754241574246277f94410d848c1a8f429ca-merged.mount: Deactivated successfully.
Nov 25 23:35:04 compute-0 podman[106043]: 2025-11-25 23:35:04.313791081 +0000 UTC m=+1.347059171 container remove 55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_thompson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:35:04 compute-0 systemd[1]: libpod-conmon-55d8e83f76146ae39aa68a104af9f0f38715dad1d24869258d5b1993cc54e7f2.scope: Deactivated successfully.
Nov 25 23:35:04 compute-0 sudo[105936]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:35:04 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:35:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:35:04 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:35:04 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev ec9b7432-ea76-4cdd-a465-d453686928b7 does not exist
Nov 25 23:35:04 compute-0 sudo[106118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:35:04 compute-0 sudo[106118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:35:04 compute-0 sudo[106118]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:04 compute-0 sudo[106143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:35:04 compute-0 sudo[106143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:35:04 compute-0 sudo[106143]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:05 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 25 23:35:05 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 25 23:35:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:05 compute-0 ceph-mon[75654]: pgmap v148: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:05 compute-0 ceph-mon[75654]: 5.15 scrub starts
Nov 25 23:35:05 compute-0 ceph-mon[75654]: 5.15 scrub ok
Nov 25 23:35:05 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:35:05 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:35:05 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 25 23:35:05 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 25 23:35:05 compute-0 sshd-session[106068]: Invalid user controlm from 45.78.216.132 port 50828
Nov 25 23:35:05 compute-0 sshd-session[106068]: Received disconnect from 45.78.216.132 port 50828:11: Bye Bye [preauth]
Nov 25 23:35:05 compute-0 sshd-session[106068]: Disconnected from invalid user controlm 45.78.216.132 port 50828 [preauth]
Nov 25 23:35:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:06 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 25 23:35:06 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 25 23:35:06 compute-0 ceph-mon[75654]: 3.3 scrub starts
Nov 25 23:35:06 compute-0 ceph-mon[75654]: 3.3 scrub ok
Nov 25 23:35:06 compute-0 ceph-mon[75654]: 4.f scrub starts
Nov 25 23:35:06 compute-0 ceph-mon[75654]: 4.f scrub ok
Nov 25 23:35:07 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 25 23:35:07 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 25 23:35:07 compute-0 ceph-mon[75654]: pgmap v149: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:07 compute-0 ceph-mon[75654]: 5.10 deep-scrub starts
Nov 25 23:35:07 compute-0 ceph-mon[75654]: 5.10 deep-scrub ok
Nov 25 23:35:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:08 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 25 23:35:08 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 25 23:35:08 compute-0 ceph-mon[75654]: 3.6 scrub starts
Nov 25 23:35:08 compute-0 ceph-mon[75654]: 3.6 scrub ok
Nov 25 23:35:09 compute-0 ceph-mon[75654]: pgmap v150: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:09 compute-0 ceph-mon[75654]: 5.17 scrub starts
Nov 25 23:35:09 compute-0 ceph-mon[75654]: 5.17 scrub ok
Nov 25 23:35:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:10 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 25 23:35:10 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 25 23:35:11 compute-0 ceph-mon[75654]: pgmap v151: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:12 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 25 23:35:12 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 25 23:35:12 compute-0 ceph-mon[75654]: 5.1b scrub starts
Nov 25 23:35:12 compute-0 ceph-mon[75654]: 5.1b scrub ok
Nov 25 23:35:12 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 25 23:35:12 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 25 23:35:13 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 25 23:35:13 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 25 23:35:13 compute-0 ceph-mon[75654]: pgmap v152: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:13 compute-0 ceph-mon[75654]: 2.f scrub starts
Nov 25 23:35:13 compute-0 ceph-mon[75654]: 2.f scrub ok
Nov 25 23:35:13 compute-0 ceph-mon[75654]: 4.d scrub starts
Nov 25 23:35:13 compute-0 ceph-mon[75654]: 4.d scrub ok
Nov 25 23:35:14 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 25 23:35:14 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 25 23:35:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:14 compute-0 ceph-mon[75654]: 2.18 scrub starts
Nov 25 23:35:14 compute-0 ceph-mon[75654]: 2.18 scrub ok
Nov 25 23:35:15 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 25 23:35:15 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 25 23:35:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:15 compute-0 ceph-mon[75654]: 7.9 scrub starts
Nov 25 23:35:15 compute-0 ceph-mon[75654]: 7.9 scrub ok
Nov 25 23:35:15 compute-0 ceph-mon[75654]: pgmap v153: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:16 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Nov 25 23:35:16 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Nov 25 23:35:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:16 compute-0 ceph-mon[75654]: 5.1c scrub starts
Nov 25 23:35:16 compute-0 ceph-mon[75654]: 5.1c scrub ok
Nov 25 23:35:17 compute-0 ceph-mon[75654]: 5.1f deep-scrub starts
Nov 25 23:35:17 compute-0 ceph-mon[75654]: 5.1f deep-scrub ok
Nov 25 23:35:17 compute-0 ceph-mon[75654]: pgmap v154: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:18 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 25 23:35:18 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 25 23:35:19 compute-0 ceph-mon[75654]: pgmap v155: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:19 compute-0 ceph-mon[75654]: 4.2 scrub starts
Nov 25 23:35:19 compute-0 ceph-mon[75654]: 4.2 scrub ok
Nov 25 23:35:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:20 compute-0 sshd-session[106227]: Received disconnect from 58.82.169.249 port 34120:11:  [preauth]
Nov 25 23:35:20 compute-0 sshd-session[106227]: Disconnected from authenticating user root 58.82.169.249 port 34120 [preauth]
Nov 25 23:35:20 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 25 23:35:20 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 25 23:35:21 compute-0 ceph-mon[75654]: pgmap v156: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:21 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 25 23:35:21 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 25 23:35:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:22 compute-0 ceph-mon[75654]: 7.3 scrub starts
Nov 25 23:35:22 compute-0 ceph-mon[75654]: 7.3 scrub ok
Nov 25 23:35:23 compute-0 ceph-mon[75654]: 7.4 scrub starts
Nov 25 23:35:23 compute-0 ceph-mon[75654]: 7.4 scrub ok
Nov 25 23:35:23 compute-0 ceph-mon[75654]: pgmap v157: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:24 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 25 23:35:24 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 25 23:35:24 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 25 23:35:24 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 25 23:35:25 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 25 23:35:25 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 25 23:35:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:25 compute-0 ceph-mon[75654]: pgmap v158: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:25 compute-0 ceph-mon[75654]: 4.18 scrub starts
Nov 25 23:35:25 compute-0 ceph-mon[75654]: 4.18 scrub ok
Nov 25 23:35:25 compute-0 ceph-mon[75654]: 4.4 scrub starts
Nov 25 23:35:25 compute-0 ceph-mon[75654]: 4.4 scrub ok
Nov 25 23:35:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:35:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:35:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:35:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:35:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:35:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:35:26 compute-0 ceph-mon[75654]: 4.1a scrub starts
Nov 25 23:35:26 compute-0 ceph-mon[75654]: 4.1a scrub ok
Nov 25 23:35:26 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 25 23:35:26 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 25 23:35:26 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 25 23:35:26 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 25 23:35:27 compute-0 ceph-mon[75654]: pgmap v159: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:27 compute-0 ceph-mon[75654]: 6.1 scrub starts
Nov 25 23:35:27 compute-0 ceph-mon[75654]: 6.1 scrub ok
Nov 25 23:35:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:28 compute-0 ceph-mon[75654]: 3.1b scrub starts
Nov 25 23:35:28 compute-0 ceph-mon[75654]: 3.1b scrub ok
Nov 25 23:35:28 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 25 23:35:28 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 25 23:35:29 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 25 23:35:29 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 25 23:35:29 compute-0 sudo[104882]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:29 compute-0 ceph-mon[75654]: pgmap v160: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:29 compute-0 ceph-mon[75654]: 4.7 scrub starts
Nov 25 23:35:29 compute-0 ceph-mon[75654]: 4.7 scrub ok
Nov 25 23:35:29 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 25 23:35:29 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 25 23:35:29 compute-0 sudo[106378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdhxqonxszswrbguzmfhhiuahaxwcuxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113729.6364446-137-9341735737600/AnsiballZ_command.py'
Nov 25 23:35:29 compute-0 sudo[106378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:30 compute-0 python3.9[106380]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:35:30 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 25 23:35:30 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 25 23:35:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:30 compute-0 ceph-mon[75654]: 4.1b scrub starts
Nov 25 23:35:30 compute-0 ceph-mon[75654]: 4.1b scrub ok
Nov 25 23:35:30 compute-0 ceph-mon[75654]: 7.6 scrub starts
Nov 25 23:35:30 compute-0 ceph-mon[75654]: 7.6 scrub ok
Nov 25 23:35:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 25 23:35:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 25 23:35:30 compute-0 sudo[106378]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:31 compute-0 ceph-mon[75654]: pgmap v161: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:31 compute-0 ceph-mon[75654]: 4.e scrub starts
Nov 25 23:35:31 compute-0 ceph-mon[75654]: 4.e scrub ok
Nov 25 23:35:31 compute-0 ceph-mon[75654]: 4.5 scrub starts
Nov 25 23:35:31 compute-0 ceph-mon[75654]: 4.5 scrub ok
Nov 25 23:35:31 compute-0 sudo[106665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihtsruazryskpwloydspzvamypeydper ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113731.1469188-145-72703191263556/AnsiballZ_selinux.py'
Nov 25 23:35:31 compute-0 sudo[106665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:32 compute-0 python3.9[106667]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 23:35:32 compute-0 sudo[106665]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:32 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 25 23:35:32 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 25 23:35:33 compute-0 sudo[106817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rakzxiuzftelxaffuodyctefpaxtbfch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113732.7250159-156-85849628718684/AnsiballZ_command.py'
Nov 25 23:35:33 compute-0 sudo[106817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:33 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 25 23:35:33 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 25 23:35:33 compute-0 python3.9[106819]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 23:35:33 compute-0 sudo[106817]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:33 compute-0 ceph-mon[75654]: pgmap v162: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:33 compute-0 ceph-mon[75654]: 5.1d scrub starts
Nov 25 23:35:33 compute-0 ceph-mon[75654]: 5.1d scrub ok
Nov 25 23:35:33 compute-0 sudo[106969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cahronxoomsjmehiqngkaxljuxjxvbqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113733.5292275-164-8342307137276/AnsiballZ_file.py'
Nov 25 23:35:33 compute-0 sudo[106969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:34 compute-0 python3.9[106971]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:35:34 compute-0 sudo[106969]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:34 compute-0 ceph-mon[75654]: 4.a scrub starts
Nov 25 23:35:34 compute-0 ceph-mon[75654]: 4.a scrub ok
Nov 25 23:35:34 compute-0 sudo[107121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyiemedmayinjotjqfkiewldarhwvnlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113734.4013112-172-27870677572665/AnsiballZ_mount.py'
Nov 25 23:35:34 compute-0 sudo[107121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:35 compute-0 python3.9[107123]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 23:35:35 compute-0 sudo[107121]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:35 compute-0 ceph-mon[75654]: pgmap v163: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Nov 25 23:35:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Nov 25 23:35:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:36 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 25 23:35:36 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 25 23:35:36 compute-0 sudo[107273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvhcxjojjsadrcxdssyignfecbymxkoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113735.9648194-200-129445805400668/AnsiballZ_file.py'
Nov 25 23:35:36 compute-0 sudo[107273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:36 compute-0 python3.9[107275]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:35:36 compute-0 sudo[107273]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:36 compute-0 ceph-mon[75654]: 3.9 deep-scrub starts
Nov 25 23:35:36 compute-0 ceph-mon[75654]: 3.9 deep-scrub ok
Nov 25 23:35:36 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 25 23:35:36 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 25 23:35:37 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Nov 25 23:35:37 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Nov 25 23:35:37 compute-0 sudo[107425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqopjifkalfqsuawaxcevzzmdsfkcrvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113736.8112116-208-188578682845066/AnsiballZ_stat.py'
Nov 25 23:35:37 compute-0 sudo[107425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:37 compute-0 python3.9[107427]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:35:37 compute-0 sudo[107425]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:37 compute-0 ceph-mon[75654]: pgmap v164: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:37 compute-0 ceph-mon[75654]: 4.13 scrub starts
Nov 25 23:35:37 compute-0 ceph-mon[75654]: 4.13 scrub ok
Nov 25 23:35:37 compute-0 ceph-mon[75654]: 7.1f scrub starts
Nov 25 23:35:37 compute-0 ceph-mon[75654]: 7.1f scrub ok
Nov 25 23:35:37 compute-0 sudo[107503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcdilkmemwkaextprtwsynvcqxgkedzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113736.8112116-208-188578682845066/AnsiballZ_file.py'
Nov 25 23:35:37 compute-0 sudo[107503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:38 compute-0 python3.9[107505]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:35:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:38 compute-0 sudo[107503]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:38 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 25 23:35:38 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 25 23:35:38 compute-0 ceph-mon[75654]: 4.1c deep-scrub starts
Nov 25 23:35:38 compute-0 ceph-mon[75654]: 4.1c deep-scrub ok
Nov 25 23:35:39 compute-0 sudo[107655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbzlbqnvmkrmrpxegzrywtaefehjlyad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113738.6869547-229-219264309385127/AnsiballZ_stat.py'
Nov 25 23:35:39 compute-0 sudo[107655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:39 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 25 23:35:39 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 25 23:35:39 compute-0 python3.9[107657]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:35:39 compute-0 sudo[107655]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 25 23:35:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 25 23:35:39 compute-0 ceph-mon[75654]: pgmap v165: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:39 compute-0 ceph-mon[75654]: 4.11 scrub starts
Nov 25 23:35:39 compute-0 ceph-mon[75654]: 4.11 scrub ok
Nov 25 23:35:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:40 compute-0 sudo[107809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syiqrzupgpejfwzycypxfggcnkqjmrsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113739.8715718-242-226793894062299/AnsiballZ_getent.py'
Nov 25 23:35:40 compute-0 sudo[107809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:40 compute-0 ceph-mon[75654]: 7.1c deep-scrub starts
Nov 25 23:35:40 compute-0 ceph-mon[75654]: 7.1c deep-scrub ok
Nov 25 23:35:40 compute-0 ceph-mon[75654]: 4.9 scrub starts
Nov 25 23:35:40 compute-0 ceph-mon[75654]: 4.9 scrub ok
Nov 25 23:35:40 compute-0 ceph-mon[75654]: pgmap v166: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:40 compute-0 python3.9[107811]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 23:35:40 compute-0 sudo[107809]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:41 compute-0 sudo[107962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwnxqyzaysbqqozevaqeqostzlilqcyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113741.0111673-252-34279647168595/AnsiballZ_getent.py'
Nov 25 23:35:41 compute-0 sudo[107962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:41 compute-0 python3.9[107964]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 23:35:41 compute-0 sudo[107962]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:42 compute-0 sudo[108115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdyjgxhcgkuaenscnyhhhblyakfduzde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113741.9129195-260-252410736194776/AnsiballZ_group.py'
Nov 25 23:35:42 compute-0 sudo[108115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:42 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 25 23:35:42 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 25 23:35:42 compute-0 python3.9[108117]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 23:35:42 compute-0 sudo[108115]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:43 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 25 23:35:43 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 25 23:35:43 compute-0 ceph-mon[75654]: pgmap v167: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:43 compute-0 ceph-mon[75654]: 4.8 scrub starts
Nov 25 23:35:43 compute-0 ceph-mon[75654]: 4.8 scrub ok
Nov 25 23:35:43 compute-0 sudo[108267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emczbxeyncjbxykxvwymfllxvvvskiwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113742.8912013-269-15331410692454/AnsiballZ_file.py'
Nov 25 23:35:43 compute-0 sudo[108267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:43 compute-0 python3.9[108269]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 23:35:43 compute-0 sudo[108267]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:44 compute-0 ceph-mon[75654]: 3.18 scrub starts
Nov 25 23:35:44 compute-0 ceph-mon[75654]: 3.18 scrub ok
Nov 25 23:35:44 compute-0 sudo[108419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhydvxdzvbhzkbpktrhhxoxvhpdhsqjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113743.7805514-280-249645301252926/AnsiballZ_dnf.py'
Nov 25 23:35:44 compute-0 sudo[108419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:44 compute-0 python3.9[108421]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:35:44 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 25 23:35:44 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 25 23:35:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 25 23:35:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 25 23:35:45 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 25 23:35:45 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 25 23:35:45 compute-0 ceph-mon[75654]: pgmap v168: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:45 compute-0 ceph-mon[75654]: 4.14 scrub starts
Nov 25 23:35:45 compute-0 ceph-mon[75654]: 4.14 scrub ok
Nov 25 23:35:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:45 compute-0 sudo[108419]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:46 compute-0 ceph-mon[75654]: 7.18 scrub starts
Nov 25 23:35:46 compute-0 ceph-mon[75654]: 7.18 scrub ok
Nov 25 23:35:46 compute-0 ceph-mon[75654]: 7.11 scrub starts
Nov 25 23:35:46 compute-0 ceph-mon[75654]: 7.11 scrub ok
Nov 25 23:35:46 compute-0 sudo[108574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozwdroduqldajyydoxindqcpflqvkixr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113745.8865857-288-57939071416023/AnsiballZ_file.py'
Nov 25 23:35:46 compute-0 sudo[108574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:46 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 25 23:35:46 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 25 23:35:46 compute-0 sshd-session[108423]: Received disconnect from 124.18.141.70 port 32890:11: Bye Bye [preauth]
Nov 25 23:35:46 compute-0 sshd-session[108423]: Disconnected from authenticating user root 124.18.141.70 port 32890 [preauth]
Nov 25 23:35:46 compute-0 python3.9[108576]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:35:46 compute-0 sudo[108574]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:47 compute-0 sudo[108726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjbaxpbokjssqxnfqqnxiahtuxuuhauw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113746.7732794-296-216455600050736/AnsiballZ_stat.py'
Nov 25 23:35:47 compute-0 sudo[108726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:47 compute-0 ceph-mon[75654]: pgmap v169: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:47 compute-0 ceph-mon[75654]: 4.12 scrub starts
Nov 25 23:35:47 compute-0 ceph-mon[75654]: 4.12 scrub ok
Nov 25 23:35:47 compute-0 python3.9[108728]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:35:47 compute-0 sudo[108726]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:47 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 25 23:35:47 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 25 23:35:47 compute-0 sudo[108806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxkxwgedwcukzeqxorovogmwvhbxcjzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113746.7732794-296-216455600050736/AnsiballZ_file.py'
Nov 25 23:35:47 compute-0 sudo[108806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:47 compute-0 python3.9[108808]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:35:47 compute-0 sudo[108806]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:48 compute-0 ceph-mon[75654]: 4.10 scrub starts
Nov 25 23:35:48 compute-0 ceph-mon[75654]: 4.10 scrub ok
Nov 25 23:35:48 compute-0 sudo[108958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uitzacamukssnliyphhzrbduwrzqfhha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113748.1073968-309-123610976883579/AnsiballZ_stat.py'
Nov 25 23:35:48 compute-0 sudo[108958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:48 compute-0 python3.9[108960]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:35:48 compute-0 sudo[108958]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:48 compute-0 sshd-session[108759]: Invalid user vyos from 68.183.82.237 port 41398
Nov 25 23:35:48 compute-0 sudo[109036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekxmmsvluvdnjdpbgywuknnxvfforlrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113748.1073968-309-123610976883579/AnsiballZ_file.py'
Nov 25 23:35:48 compute-0 sudo[109036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:49 compute-0 sshd-session[108759]: Received disconnect from 68.183.82.237 port 41398:11: Bye Bye [preauth]
Nov 25 23:35:49 compute-0 sshd-session[108759]: Disconnected from invalid user vyos 68.183.82.237 port 41398 [preauth]
Nov 25 23:35:49 compute-0 python3.9[109038]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:35:49 compute-0 sudo[109036]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:49 compute-0 ceph-mon[75654]: pgmap v170: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:49 compute-0 sudo[109188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiogzqclataegtoqqinwpcheqrjgcwaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113749.5865765-324-215131868858883/AnsiballZ_dnf.py'
Nov 25 23:35:49 compute-0 sudo[109188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:50 compute-0 python3.9[109190]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:35:51 compute-0 ceph-mon[75654]: pgmap v171: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:51 compute-0 sudo[109188]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:51 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 25 23:35:51 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 25 23:35:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:52 compute-0 ceph-mon[75654]: 2.1b scrub starts
Nov 25 23:35:52 compute-0 ceph-mon[75654]: 2.1b scrub ok
Nov 25 23:35:52 compute-0 python3.9[109341]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:35:53 compute-0 ceph-mon[75654]: pgmap v172: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:53 compute-0 python3.9[109495]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 23:35:53 compute-0 sshd-session[109368]: Invalid user deploy from 202.83.162.167 port 40176
Nov 25 23:35:54 compute-0 sshd-session[109368]: Received disconnect from 202.83.162.167 port 40176:11: Bye Bye [preauth]
Nov 25 23:35:54 compute-0 sshd-session[109368]: Disconnected from invalid user deploy 202.83.162.167 port 40176 [preauth]
Nov 25 23:35:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:54 compute-0 python3.9[109645]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:35:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:35:55 compute-0 ceph-mon[75654]: pgmap v173: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:55 compute-0 sudo[109795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjmsxmhmbkelnikmtwzpotzhcbtaupde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113754.7342777-365-227814423852115/AnsiballZ_systemd.py'
Nov 25 23:35:55 compute-0 sudo[109795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:55 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 25 23:35:55 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 25 23:35:55 compute-0 python3.9[109797]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:35:55 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 23:35:55 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 23:35:55 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 23:35:55 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:35:56
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'images']
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:35:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:35:56 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 23:35:56 compute-0 sudo[109795]: pam_unix(sudo:session): session closed for user root
Nov 25 23:35:56 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 25 23:35:56 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 25 23:35:56 compute-0 ceph-mon[75654]: 2.17 deep-scrub starts
Nov 25 23:35:56 compute-0 ceph-mon[75654]: 2.17 deep-scrub ok
Nov 25 23:35:56 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 25 23:35:56 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 25 23:35:57 compute-0 python3.9[109959]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 23:35:57 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 25 23:35:57 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 25 23:35:57 compute-0 ceph-mon[75654]: pgmap v174: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:57 compute-0 ceph-mon[75654]: 3.16 scrub starts
Nov 25 23:35:57 compute-0 ceph-mon[75654]: 3.16 scrub ok
Nov 25 23:35:57 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 25 23:35:57 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 25 23:35:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:58 compute-0 ceph-mon[75654]: 7.1b scrub starts
Nov 25 23:35:58 compute-0 ceph-mon[75654]: 7.1b scrub ok
Nov 25 23:35:58 compute-0 ceph-mon[75654]: 7.a scrub starts
Nov 25 23:35:58 compute-0 ceph-mon[75654]: 7.a scrub ok
Nov 25 23:35:58 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 25 23:35:58 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 25 23:35:59 compute-0 ceph-mon[75654]: 3.1f scrub starts
Nov 25 23:35:59 compute-0 ceph-mon[75654]: 3.1f scrub ok
Nov 25 23:35:59 compute-0 ceph-mon[75654]: pgmap v175: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:35:59 compute-0 ceph-mon[75654]: 5.11 scrub starts
Nov 25 23:35:59 compute-0 ceph-mon[75654]: 5.11 scrub ok
Nov 25 23:35:59 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 25 23:35:59 compute-0 sudo[110109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejsptifkjivitiqxllieflzyaxufkgby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113759.0452888-422-156581689807570/AnsiballZ_systemd.py'
Nov 25 23:35:59 compute-0 sudo[110109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:35:59 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 25 23:35:59 compute-0 python3.9[110111]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:35:59 compute-0 sudo[110109]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:00 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 25 23:36:00 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 25 23:36:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:00 compute-0 ceph-mon[75654]: 2.15 deep-scrub starts
Nov 25 23:36:00 compute-0 ceph-mon[75654]: 2.15 deep-scrub ok
Nov 25 23:36:00 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 25 23:36:00 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 25 23:36:00 compute-0 sudo[110263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axeipmwudeisqodqkxkxlxchvlkqdjno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113760.0723577-422-167179764701958/AnsiballZ_systemd.py'
Nov 25 23:36:00 compute-0 sudo[110263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:00 compute-0 python3.9[110265]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:36:00 compute-0 sudo[110263]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:01 compute-0 ceph-mon[75654]: pgmap v176: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:01 compute-0 ceph-mon[75654]: 3.11 scrub starts
Nov 25 23:36:01 compute-0 ceph-mon[75654]: 3.11 scrub ok
Nov 25 23:36:01 compute-0 ceph-mon[75654]: 5.12 scrub starts
Nov 25 23:36:01 compute-0 ceph-mon[75654]: 5.12 scrub ok
Nov 25 23:36:01 compute-0 sshd-session[102968]: Connection closed by 192.168.122.30 port 54026
Nov 25 23:36:01 compute-0 sshd-session[102965]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:36:01 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 25 23:36:01 compute-0 systemd[1]: session-35.scope: Consumed 1min 6.656s CPU time.
Nov 25 23:36:01 compute-0 systemd-logind[789]: Session 35 logged out. Waiting for processes to exit.
Nov 25 23:36:01 compute-0 systemd-logind[789]: Removed session 35.
Nov 25 23:36:01 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 25 23:36:01 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:36:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:36:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:02 compute-0 ceph-mon[75654]: 5.13 scrub starts
Nov 25 23:36:02 compute-0 ceph-mon[75654]: 5.13 scrub ok
Nov 25 23:36:02 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 25 23:36:02 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 25 23:36:03 compute-0 ceph-mon[75654]: pgmap v177: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:03 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 25 23:36:03 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 25 23:36:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:04 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Nov 25 23:36:04 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Nov 25 23:36:04 compute-0 ceph-mon[75654]: 3.15 scrub starts
Nov 25 23:36:04 compute-0 ceph-mon[75654]: 3.15 scrub ok
Nov 25 23:36:04 compute-0 sudo[110292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:36:04 compute-0 sudo[110292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:04 compute-0 sudo[110292]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:04 compute-0 sudo[110317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:36:04 compute-0 sudo[110317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:04 compute-0 sudo[110317]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:04 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 25 23:36:04 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 25 23:36:04 compute-0 sudo[110342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:36:04 compute-0 sudo[110342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:04 compute-0 sudo[110342]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:04 compute-0 sudo[110367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:36:04 compute-0 sudo[110367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:05 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 25 23:36:05 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 25 23:36:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:05 compute-0 sudo[110367]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:05 compute-0 ceph-mon[75654]: 3.a scrub starts
Nov 25 23:36:05 compute-0 ceph-mon[75654]: 3.a scrub ok
Nov 25 23:36:05 compute-0 ceph-mon[75654]: pgmap v178: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:05 compute-0 ceph-mon[75654]: 7.8 deep-scrub starts
Nov 25 23:36:05 compute-0 ceph-mon[75654]: 7.8 deep-scrub ok
Nov 25 23:36:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:36:05 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:36:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:36:05 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:36:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:36:05 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:36:05 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 9f554c3b-47e2-4a5f-b767-c65c437e5e4b does not exist
Nov 25 23:36:05 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 47e23d0d-6715-4914-ab8e-93815f9b9ee1 does not exist
Nov 25 23:36:05 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 11febb9f-6c7e-4ca9-bfca-045a47236623 does not exist
Nov 25 23:36:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:36:05 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:36:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:36:05 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:36:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:36:05 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:36:05 compute-0 sudo[110423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:36:05 compute-0 sudo[110423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:05 compute-0 sudo[110423]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:05 compute-0 sudo[110448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:36:05 compute-0 sudo[110448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:05 compute-0 sudo[110448]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:05 compute-0 sudo[110473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:36:05 compute-0 sudo[110473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:05 compute-0 sudo[110473]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:05 compute-0 sudo[110498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:36:05 compute-0 sudo[110498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:06 compute-0 podman[110563]: 2025-11-25 23:36:06.12989105 +0000 UTC m=+0.074392931 container create bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 25 23:36:06 compute-0 systemd[1]: Started libpod-conmon-bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb.scope.
Nov 25 23:36:06 compute-0 podman[110563]: 2025-11-25 23:36:06.099081907 +0000 UTC m=+0.043583838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:36:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:36:06 compute-0 podman[110563]: 2025-11-25 23:36:06.239401798 +0000 UTC m=+0.183903709 container init bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:36:06 compute-0 podman[110563]: 2025-11-25 23:36:06.252414035 +0000 UTC m=+0.196915917 container start bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:36:06 compute-0 podman[110563]: 2025-11-25 23:36:06.257027229 +0000 UTC m=+0.201529160 container attach bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:36:06 compute-0 exciting_shirley[110582]: 167 167
Nov 25 23:36:06 compute-0 systemd[1]: libpod-bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb.scope: Deactivated successfully.
Nov 25 23:36:06 compute-0 podman[110563]: 2025-11-25 23:36:06.262785072 +0000 UTC m=+0.207286993 container died bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:36:06 compute-0 sshd-session[110578]: Accepted publickey for zuul from 192.168.122.30 port 33210 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:36:06 compute-0 systemd-logind[789]: New session 36 of user zuul.
Nov 25 23:36:06 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 25 23:36:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-81640ac529e1c138cda3fcd1ab721c959a39bbb0aeb001e5b16ada2b81414c9f-merged.mount: Deactivated successfully.
Nov 25 23:36:06 compute-0 sshd-session[110578]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:36:06 compute-0 podman[110563]: 2025-11-25 23:36:06.324336098 +0000 UTC m=+0.268837979 container remove bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shirley, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 23:36:06 compute-0 ceph-mon[75654]: 5.7 scrub starts
Nov 25 23:36:06 compute-0 ceph-mon[75654]: 5.7 scrub ok
Nov 25 23:36:06 compute-0 ceph-mon[75654]: 7.15 deep-scrub starts
Nov 25 23:36:06 compute-0 ceph-mon[75654]: 7.15 deep-scrub ok
Nov 25 23:36:06 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:36:06 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:36:06 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:36:06 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:36:06 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:36:06 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:36:06 compute-0 systemd[1]: libpod-conmon-bc511c6cc446af5c13d53248bd1ed370994b60305612f0435ba12b4810fcbdbb.scope: Deactivated successfully.
Nov 25 23:36:06 compute-0 podman[110647]: 2025-11-25 23:36:06.548633906 +0000 UTC m=+0.066696965 container create 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:36:06 compute-0 systemd[1]: Started libpod-conmon-299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc.scope.
Nov 25 23:36:06 compute-0 podman[110647]: 2025-11-25 23:36:06.521613913 +0000 UTC m=+0.039677062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:36:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:06 compute-0 podman[110647]: 2025-11-25 23:36:06.666009643 +0000 UTC m=+0.184072772 container init 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:36:06 compute-0 podman[110647]: 2025-11-25 23:36:06.681774955 +0000 UTC m=+0.199838014 container start 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:36:06 compute-0 podman[110647]: 2025-11-25 23:36:06.685418832 +0000 UTC m=+0.203481911 container attach 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:36:07 compute-0 ceph-mon[75654]: pgmap v179: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:07 compute-0 python3.9[110777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:36:07 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 25 23:36:07 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 25 23:36:07 compute-0 elegant_bose[110675]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:36:07 compute-0 elegant_bose[110675]: --> relative data size: 1.0
Nov 25 23:36:07 compute-0 elegant_bose[110675]: --> All data devices are unavailable
Nov 25 23:36:07 compute-0 systemd[1]: libpod-299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc.scope: Deactivated successfully.
Nov 25 23:36:07 compute-0 systemd[1]: libpod-299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc.scope: Consumed 1.035s CPU time.
Nov 25 23:36:07 compute-0 podman[110647]: 2025-11-25 23:36:07.760942987 +0000 UTC m=+1.279006116 container died 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-951fd33670c62c3697a272c7cb7f3bfc8f3e2f94043dc69d077097005cae1aee-merged.mount: Deactivated successfully.
Nov 25 23:36:07 compute-0 podman[110647]: 2025-11-25 23:36:07.825657128 +0000 UTC m=+1.343720177 container remove 299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 23:36:07 compute-0 systemd[1]: libpod-conmon-299c5a69da69fe4cf49c519d0b47c6abea05b02d18fde74fb3a6cf95caba83cc.scope: Deactivated successfully.
Nov 25 23:36:07 compute-0 sudo[110498]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:07 compute-0 sudo[110844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:36:07 compute-0 sudo[110844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:07 compute-0 sudo[110844]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:07 compute-0 sudo[110869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:36:07 compute-0 sudo[110869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:07 compute-0 sudo[110869]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:08 compute-0 sudo[110894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:36:08 compute-0 sudo[110894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:08 compute-0 sudo[110894]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:08 compute-0 sudo[110942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:36:08 compute-0 sudo[110942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:08 compute-0 ceph-mon[75654]: 5.9 scrub starts
Nov 25 23:36:08 compute-0 ceph-mon[75654]: 5.9 scrub ok
Nov 25 23:36:08 compute-0 podman[111074]: 2025-11-25 23:36:08.558747377 +0000 UTC m=+0.062829311 container create 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 23:36:08 compute-0 systemd[1]: Started libpod-conmon-030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf.scope.
Nov 25 23:36:08 compute-0 sudo[111123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqfybeilyjuojdzadluiaqukydzfzjco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113768.0889118-36-270400758911670/AnsiballZ_getent.py'
Nov 25 23:36:08 compute-0 sudo[111123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:08 compute-0 podman[111074]: 2025-11-25 23:36:08.530515342 +0000 UTC m=+0.034597256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:36:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:36:08 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 25 23:36:08 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 25 23:36:08 compute-0 podman[111074]: 2025-11-25 23:36:08.684952761 +0000 UTC m=+0.189034655 container init 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:36:08 compute-0 podman[111074]: 2025-11-25 23:36:08.70175679 +0000 UTC m=+0.205838684 container start 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 23:36:08 compute-0 podman[111074]: 2025-11-25 23:36:08.704601986 +0000 UTC m=+0.208683880 container attach 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 25 23:36:08 compute-0 nervous_brown[111125]: 167 167
Nov 25 23:36:08 compute-0 systemd[1]: libpod-030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf.scope: Deactivated successfully.
Nov 25 23:36:08 compute-0 podman[111074]: 2025-11-25 23:36:08.711371147 +0000 UTC m=+0.215453041 container died 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:36:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8978721606b9f609951ca55866adff388c1c246a7105b015b7a2c2de398c5c0-merged.mount: Deactivated successfully.
Nov 25 23:36:08 compute-0 podman[111074]: 2025-11-25 23:36:08.756243517 +0000 UTC m=+0.260325411 container remove 030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_brown, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:36:08 compute-0 systemd[1]: libpod-conmon-030f461b2754d6e2994f5cddcd2d28212c149ab15ea2280aa2e70b300d394faf.scope: Deactivated successfully.
Nov 25 23:36:08 compute-0 python3.9[111127]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 23:36:08 compute-0 sudo[111123]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:08 compute-0 podman[111151]: 2025-11-25 23:36:08.937686939 +0000 UTC m=+0.047411659 container create 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:36:08 compute-0 systemd[1]: Started libpod-conmon-2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe.scope.
Nov 25 23:36:09 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:36:09 compute-0 podman[111151]: 2025-11-25 23:36:08.919848771 +0000 UTC m=+0.029573511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32282179f07503ef902c5ac0073ea7c86630365b39a5e685d16b8fc7c8e95c84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32282179f07503ef902c5ac0073ea7c86630365b39a5e685d16b8fc7c8e95c84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32282179f07503ef902c5ac0073ea7c86630365b39a5e685d16b8fc7c8e95c84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32282179f07503ef902c5ac0073ea7c86630365b39a5e685d16b8fc7c8e95c84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:09 compute-0 podman[111151]: 2025-11-25 23:36:09.035054151 +0000 UTC m=+0.144778921 container init 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:36:09 compute-0 podman[111151]: 2025-11-25 23:36:09.047323989 +0000 UTC m=+0.157048719 container start 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:36:09 compute-0 podman[111151]: 2025-11-25 23:36:09.051426229 +0000 UTC m=+0.161151029 container attach 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 23:36:09 compute-0 ceph-mon[75654]: pgmap v180: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:09 compute-0 sudo[111324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yobkcaxfpvmspcboknrkrjzzncmsqhju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113769.2836413-48-276238126718947/AnsiballZ_setup.py'
Nov 25 23:36:09 compute-0 sudo[111324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:09 compute-0 brave_swanson[111192]: {
Nov 25 23:36:09 compute-0 brave_swanson[111192]:     "0": [
Nov 25 23:36:09 compute-0 brave_swanson[111192]:         {
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "devices": [
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "/dev/loop3"
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             ],
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_name": "ceph_lv0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_size": "21470642176",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "name": "ceph_lv0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "tags": {
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.cluster_name": "ceph",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.crush_device_class": "",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.encrypted": "0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.osd_id": "0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.type": "block",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.vdo": "0"
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             },
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "type": "block",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "vg_name": "ceph_vg0"
Nov 25 23:36:09 compute-0 brave_swanson[111192]:         }
Nov 25 23:36:09 compute-0 brave_swanson[111192]:     ],
Nov 25 23:36:09 compute-0 brave_swanson[111192]:     "1": [
Nov 25 23:36:09 compute-0 brave_swanson[111192]:         {
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "devices": [
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "/dev/loop4"
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             ],
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_name": "ceph_lv1",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_size": "21470642176",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "name": "ceph_lv1",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "tags": {
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.cluster_name": "ceph",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.crush_device_class": "",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.encrypted": "0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.osd_id": "1",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.type": "block",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.vdo": "0"
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             },
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "type": "block",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "vg_name": "ceph_vg1"
Nov 25 23:36:09 compute-0 brave_swanson[111192]:         }
Nov 25 23:36:09 compute-0 brave_swanson[111192]:     ],
Nov 25 23:36:09 compute-0 brave_swanson[111192]:     "2": [
Nov 25 23:36:09 compute-0 brave_swanson[111192]:         {
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "devices": [
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "/dev/loop5"
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             ],
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_name": "ceph_lv2",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_size": "21470642176",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "name": "ceph_lv2",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "tags": {
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.cluster_name": "ceph",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.crush_device_class": "",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.encrypted": "0",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.osd_id": "2",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.type": "block",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:                 "ceph.vdo": "0"
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             },
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "type": "block",
Nov 25 23:36:09 compute-0 brave_swanson[111192]:             "vg_name": "ceph_vg2"
Nov 25 23:36:09 compute-0 brave_swanson[111192]:         }
Nov 25 23:36:09 compute-0 brave_swanson[111192]:     ]
Nov 25 23:36:09 compute-0 brave_swanson[111192]: }
Nov 25 23:36:09 compute-0 systemd[1]: libpod-2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe.scope: Deactivated successfully.
Nov 25 23:36:09 compute-0 podman[111151]: 2025-11-25 23:36:09.851094059 +0000 UTC m=+0.960818819 container died 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 23:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-32282179f07503ef902c5ac0073ea7c86630365b39a5e685d16b8fc7c8e95c84-merged.mount: Deactivated successfully.
Nov 25 23:36:09 compute-0 podman[111151]: 2025-11-25 23:36:09.91845659 +0000 UTC m=+1.028181320 container remove 2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_swanson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 23:36:09 compute-0 systemd[1]: libpod-conmon-2f739c507046757f9d548e33649d2f8ef9741fdb97bdf304d9baf0cf7e2a96fe.scope: Deactivated successfully.
Nov 25 23:36:09 compute-0 sudo[110942]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:10 compute-0 python3.9[111326]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:36:10 compute-0 sudo[111342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:36:10 compute-0 sudo[111342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:10 compute-0 sudo[111342]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:10 compute-0 sudo[111373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:36:10 compute-0 sudo[111373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:10 compute-0 sudo[111373]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:10 compute-0 sudo[111398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:36:10 compute-0 sudo[111398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:10 compute-0 sudo[111398]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:10 compute-0 sudo[111423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:36:10 compute-0 sudo[111324]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:10 compute-0 sudo[111423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:10 compute-0 ceph-mon[75654]: 2.2 scrub starts
Nov 25 23:36:10 compute-0 ceph-mon[75654]: 2.2 scrub ok
Nov 25 23:36:10 compute-0 podman[111514]: 2025-11-25 23:36:10.620833029 +0000 UTC m=+0.039363754 container create 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 23:36:10 compute-0 systemd[1]: Started libpod-conmon-956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa.scope.
Nov 25 23:36:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:36:10 compute-0 podman[111514]: 2025-11-25 23:36:10.606105526 +0000 UTC m=+0.024636281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:36:10 compute-0 podman[111514]: 2025-11-25 23:36:10.703316424 +0000 UTC m=+0.121847179 container init 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 23:36:10 compute-0 podman[111514]: 2025-11-25 23:36:10.713056615 +0000 UTC m=+0.131587350 container start 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:36:10 compute-0 podman[111514]: 2025-11-25 23:36:10.716659371 +0000 UTC m=+0.135190126 container attach 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:36:10 compute-0 flamboyant_bhabha[111554]: 167 167
Nov 25 23:36:10 compute-0 systemd[1]: libpod-956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa.scope: Deactivated successfully.
Nov 25 23:36:10 compute-0 podman[111514]: 2025-11-25 23:36:10.719963169 +0000 UTC m=+0.138493904 container died 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:36:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa7984a19737275d72c7e17b346e6df7d5a34dfb1354fdd9924fb5bc084b83a8-merged.mount: Deactivated successfully.
Nov 25 23:36:10 compute-0 sudo[111585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znyiyrpxfdupwnaybyxkijptvoljkrho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113769.2836413-48-276238126718947/AnsiballZ_dnf.py'
Nov 25 23:36:10 compute-0 sudo[111585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:10 compute-0 podman[111514]: 2025-11-25 23:36:10.7697443 +0000 UTC m=+0.188275055 container remove 956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:36:10 compute-0 systemd[1]: libpod-conmon-956d17324c2a4968601c0e43ddbd9bdf3e4e825370496629b664af2c4ca8d8aa.scope: Deactivated successfully.
Nov 25 23:36:10 compute-0 python3.9[111594]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 23:36:11 compute-0 podman[111605]: 2025-11-25 23:36:11.015043219 +0000 UTC m=+0.068697468 container create 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 23:36:11 compute-0 systemd[1]: Started libpod-conmon-7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd.scope.
Nov 25 23:36:11 compute-0 podman[111605]: 2025-11-25 23:36:10.986774072 +0000 UTC m=+0.040428321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:36:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af4b0a6ad317c2178872bd8b00736cd882932d8a4ada225071289117c649a00f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af4b0a6ad317c2178872bd8b00736cd882932d8a4ada225071289117c649a00f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af4b0a6ad317c2178872bd8b00736cd882932d8a4ada225071289117c649a00f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af4b0a6ad317c2178872bd8b00736cd882932d8a4ada225071289117c649a00f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:36:11 compute-0 podman[111605]: 2025-11-25 23:36:11.138103399 +0000 UTC m=+0.191757698 container init 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:36:11 compute-0 podman[111605]: 2025-11-25 23:36:11.150058578 +0000 UTC m=+0.203712817 container start 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 23:36:11 compute-0 podman[111605]: 2025-11-25 23:36:11.154336472 +0000 UTC m=+0.207990781 container attach 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:36:11 compute-0 ceph-mon[75654]: pgmap v181: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:12 compute-0 sudo[111585]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:12 compute-0 stoic_haibt[111623]: {
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "osd_id": 2,
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "type": "bluestore"
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:     },
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "osd_id": 1,
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "type": "bluestore"
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:     },
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "osd_id": 0,
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:         "type": "bluestore"
Nov 25 23:36:12 compute-0 stoic_haibt[111623]:     }
Nov 25 23:36:12 compute-0 stoic_haibt[111623]: }
Nov 25 23:36:12 compute-0 systemd[1]: libpod-7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd.scope: Deactivated successfully.
Nov 25 23:36:12 compute-0 systemd[1]: libpod-7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd.scope: Consumed 1.038s CPU time.
Nov 25 23:36:12 compute-0 podman[111605]: 2025-11-25 23:36:12.181162445 +0000 UTC m=+1.234816694 container died 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 23:36:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-af4b0a6ad317c2178872bd8b00736cd882932d8a4ada225071289117c649a00f-merged.mount: Deactivated successfully.
Nov 25 23:36:12 compute-0 podman[111605]: 2025-11-25 23:36:12.257800684 +0000 UTC m=+1.311454933 container remove 7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:36:12 compute-0 systemd[1]: libpod-conmon-7a69bfe5289e79be38c7da8ca53219751d53ab5f17a1e1ceee844884f62c32dd.scope: Deactivated successfully.
Nov 25 23:36:12 compute-0 sudo[111423]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:36:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:36:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:36:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:36:12 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 3492ecdb-c12f-404e-b450-adda31964ea3 does not exist
Nov 25 23:36:12 compute-0 sudo[111692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:36:12 compute-0 sudo[111692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:12 compute-0 sudo[111692]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:12 compute-0 sudo[111717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:36:12 compute-0 sudo[111717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:36:12 compute-0 sudo[111717]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:12 compute-0 sudo[111867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ottblpmeedczxqvogittyhvzeyrktudp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113772.4878552-62-34335131793053/AnsiballZ_dnf.py'
Nov 25 23:36:12 compute-0 sudo[111867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:13 compute-0 python3.9[111869]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:36:13 compute-0 ceph-mon[75654]: pgmap v182: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:36:13 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:36:13 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 25 23:36:13 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 25 23:36:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:14 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Nov 25 23:36:14 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Nov 25 23:36:14 compute-0 sudo[111867]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:15 compute-0 ceph-mon[75654]: 5.4 scrub starts
Nov 25 23:36:15 compute-0 ceph-mon[75654]: 5.4 scrub ok
Nov 25 23:36:15 compute-0 ceph-mon[75654]: pgmap v183: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:15 compute-0 ceph-mon[75654]: 3.5 deep-scrub starts
Nov 25 23:36:15 compute-0 ceph-mon[75654]: 3.5 deep-scrub ok
Nov 25 23:36:15 compute-0 sudo[112020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxxtwppdixizdmfhohivphxeraezesmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113774.6136835-70-32649409553097/AnsiballZ_systemd.py'
Nov 25 23:36:15 compute-0 sudo[112020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:15 compute-0 python3.9[112022]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 23:36:15 compute-0 sudo[112020]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:16 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 25 23:36:16 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 25 23:36:16 compute-0 python3.9[112177]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:36:16 compute-0 sshd-session[112023]: Invalid user tuan from 171.244.140.22 port 50126
Nov 25 23:36:17 compute-0 sshd-session[112023]: Received disconnect from 171.244.140.22 port 50126:11: Bye Bye [preauth]
Nov 25 23:36:17 compute-0 sshd-session[112023]: Disconnected from invalid user tuan 171.244.140.22 port 50126 [preauth]
Nov 25 23:36:17 compute-0 ceph-mon[75654]: pgmap v184: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:17 compute-0 ceph-mon[75654]: 3.e scrub starts
Nov 25 23:36:17 compute-0 ceph-mon[75654]: 3.e scrub ok
Nov 25 23:36:17 compute-0 sudo[112327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjotjgqtwjlkpfjwzrisubkyqugyjijk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113776.946301-88-234553761483191/AnsiballZ_sefcontext.py'
Nov 25 23:36:17 compute-0 sudo[112327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:17 compute-0 python3.9[112329]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 23:36:17 compute-0 sudo[112327]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:18 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 25 23:36:18 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 25 23:36:18 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 25 23:36:18 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 25 23:36:18 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 25 23:36:18 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 25 23:36:18 compute-0 python3.9[112479]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:36:19 compute-0 ceph-mon[75654]: pgmap v185: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:19 compute-0 ceph-mon[75654]: 7.5 scrub starts
Nov 25 23:36:19 compute-0 ceph-mon[75654]: 7.5 scrub ok
Nov 25 23:36:19 compute-0 ceph-mon[75654]: 2.3 scrub starts
Nov 25 23:36:19 compute-0 ceph-mon[75654]: 2.3 scrub ok
Nov 25 23:36:19 compute-0 ceph-mon[75654]: 2.19 scrub starts
Nov 25 23:36:19 compute-0 ceph-mon[75654]: 2.19 scrub ok
Nov 25 23:36:19 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 25 23:36:19 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 25 23:36:19 compute-0 sudo[112635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syunnsbnemdoasvjpatlygpofijdbpom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113779.442006-106-167688758662977/AnsiballZ_dnf.py'
Nov 25 23:36:19 compute-0 sudo[112635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:20 compute-0 python3.9[112637]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:36:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:20 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 25 23:36:20 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 25 23:36:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:20 compute-0 ceph-mon[75654]: 5.3 scrub starts
Nov 25 23:36:20 compute-0 ceph-mon[75654]: 5.3 scrub ok
Nov 25 23:36:20 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 25 23:36:20 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 25 23:36:21 compute-0 sudo[112635]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:21 compute-0 ceph-mon[75654]: pgmap v186: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:21 compute-0 ceph-mon[75654]: 7.c scrub starts
Nov 25 23:36:21 compute-0 ceph-mon[75654]: 7.c scrub ok
Nov 25 23:36:21 compute-0 ceph-mon[75654]: 5.2 scrub starts
Nov 25 23:36:21 compute-0 ceph-mon[75654]: 5.2 scrub ok
Nov 25 23:36:21 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Nov 25 23:36:21 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 25 23:36:21 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Nov 25 23:36:21 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 25 23:36:22 compute-0 sudo[112789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdlxfvjtwnuvkjtkqudfzjueybglolpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113781.4463618-114-11909814577825/AnsiballZ_command.py'
Nov 25 23:36:22 compute-0 sudo[112789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:22 compute-0 python3.9[112791]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:36:22 compute-0 ceph-mon[75654]: 2.8 deep-scrub starts
Nov 25 23:36:22 compute-0 ceph-mon[75654]: 2.4 scrub starts
Nov 25 23:36:22 compute-0 ceph-mon[75654]: 2.8 deep-scrub ok
Nov 25 23:36:22 compute-0 ceph-mon[75654]: 2.4 scrub ok
Nov 25 23:36:22 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 25 23:36:22 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 25 23:36:23 compute-0 sudo[112789]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:23 compute-0 ceph-mon[75654]: pgmap v187: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:23 compute-0 ceph-mon[75654]: 2.7 deep-scrub starts
Nov 25 23:36:23 compute-0 ceph-mon[75654]: 2.7 deep-scrub ok
Nov 25 23:36:23 compute-0 sudo[113076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vujexxsmeosuizmsgenuammgrcvhuacl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113783.3049784-122-211641766526079/AnsiballZ_file.py'
Nov 25 23:36:23 compute-0 sudo[113076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:24 compute-0 python3.9[113078]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 23:36:24 compute-0 sudo[113076]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:25 compute-0 python3.9[113228]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:36:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:25 compute-0 ceph-mon[75654]: pgmap v188: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:25 compute-0 sudo[113380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoakxwebmkawkabaqfttbyxqwfrmwqgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113785.3889732-138-62363761946822/AnsiballZ_dnf.py'
Nov 25 23:36:25 compute-0 sudo[113380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:25 compute-0 python3.9[113382]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:36:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:36:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:36:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:36:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:36:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:36:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:36:27 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 25 23:36:27 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 25 23:36:27 compute-0 sudo[113380]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:27 compute-0 ceph-mon[75654]: pgmap v189: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:27 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 25 23:36:27 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 25 23:36:27 compute-0 sudo[113533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giobommiojgnekbcaggbriweaezqfeqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113787.420132-147-141295452690454/AnsiballZ_dnf.py'
Nov 25 23:36:27 compute-0 sudo[113533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:28 compute-0 python3.9[113535]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:36:28 compute-0 ceph-mon[75654]: 3.8 scrub starts
Nov 25 23:36:28 compute-0 ceph-mon[75654]: 3.8 scrub ok
Nov 25 23:36:28 compute-0 ceph-mon[75654]: 2.6 scrub starts
Nov 25 23:36:28 compute-0 ceph-mon[75654]: 2.6 scrub ok
Nov 25 23:36:29 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 25 23:36:29 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 25 23:36:29 compute-0 sudo[113533]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:29 compute-0 ceph-mon[75654]: pgmap v190: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:30 compute-0 sudo[113686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvpgtqliumauhjjuhlhminodvnqtoxtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113789.7370298-159-178598769755003/AnsiballZ_stat.py'
Nov 25 23:36:30 compute-0 sudo[113686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:30 compute-0 python3.9[113688]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:36:30 compute-0 sudo[113686]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:30 compute-0 ceph-mon[75654]: 7.e scrub starts
Nov 25 23:36:30 compute-0 ceph-mon[75654]: 7.e scrub ok
Nov 25 23:36:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 25 23:36:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 25 23:36:31 compute-0 sudo[113840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onciiatzboabayywdhdzjlibdqiwbyox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113790.535898-167-27904479250246/AnsiballZ_slurp.py'
Nov 25 23:36:31 compute-0 sudo[113840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:31 compute-0 python3.9[113842]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 25 23:36:31 compute-0 sudo[113840]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:31 compute-0 ceph-mon[75654]: pgmap v191: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:31 compute-0 ceph-mon[75654]: 2.d scrub starts
Nov 25 23:36:31 compute-0 ceph-mon[75654]: 2.d scrub ok
Nov 25 23:36:31 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 25 23:36:31 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 25 23:36:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:32 compute-0 sshd-session[110599]: Connection closed by 192.168.122.30 port 33210
Nov 25 23:36:32 compute-0 sshd-session[110578]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:36:32 compute-0 systemd-logind[789]: Session 36 logged out. Waiting for processes to exit.
Nov 25 23:36:32 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 25 23:36:32 compute-0 systemd[1]: session-36.scope: Consumed 19.476s CPU time.
Nov 25 23:36:32 compute-0 systemd-logind[789]: Removed session 36.
Nov 25 23:36:32 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 25 23:36:32 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 25 23:36:33 compute-0 ceph-mon[75654]: 2.b scrub starts
Nov 25 23:36:33 compute-0 ceph-mon[75654]: 2.b scrub ok
Nov 25 23:36:33 compute-0 ceph-mon[75654]: pgmap v192: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:33 compute-0 ceph-mon[75654]: 2.1d scrub starts
Nov 25 23:36:33 compute-0 ceph-mon[75654]: 2.1d scrub ok
Nov 25 23:36:33 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 25 23:36:33 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 25 23:36:33 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 25 23:36:33 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 25 23:36:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:34 compute-0 ceph-mon[75654]: 5.5 scrub starts
Nov 25 23:36:34 compute-0 ceph-mon[75654]: 5.5 scrub ok
Nov 25 23:36:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:35 compute-0 ceph-mon[75654]: 3.1d scrub starts
Nov 25 23:36:35 compute-0 ceph-mon[75654]: 3.1d scrub ok
Nov 25 23:36:35 compute-0 ceph-mon[75654]: pgmap v193: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:36 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Nov 25 23:36:36 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Nov 25 23:36:37 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 25 23:36:37 compute-0 ceph-mon[75654]: pgmap v194: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:37 compute-0 ceph-mon[75654]: 5.16 deep-scrub starts
Nov 25 23:36:37 compute-0 ceph-mon[75654]: 5.16 deep-scrub ok
Nov 25 23:36:37 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 25 23:36:37 compute-0 sshd-session[113867]: Accepted publickey for zuul from 192.168.122.30 port 44740 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:36:37 compute-0 systemd-logind[789]: New session 37 of user zuul.
Nov 25 23:36:37 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 25 23:36:37 compute-0 sshd-session[113867]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:36:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:38 compute-0 ceph-mon[75654]: 2.1c scrub starts
Nov 25 23:36:38 compute-0 ceph-mon[75654]: 2.1c scrub ok
Nov 25 23:36:38 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 25 23:36:38 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 25 23:36:39 compute-0 python3.9[114020]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:36:39 compute-0 ceph-mon[75654]: pgmap v195: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Nov 25 23:36:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Nov 25 23:36:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:40 compute-0 python3.9[114174]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:36:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:40 compute-0 ceph-mon[75654]: 7.2 scrub starts
Nov 25 23:36:40 compute-0 ceph-mon[75654]: 7.2 scrub ok
Nov 25 23:36:40 compute-0 ceph-mon[75654]: 2.a deep-scrub starts
Nov 25 23:36:40 compute-0 ceph-mon[75654]: 2.a deep-scrub ok
Nov 25 23:36:41 compute-0 python3.9[114367]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:36:41 compute-0 ceph-mon[75654]: pgmap v196: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:41 compute-0 sshd-session[113870]: Connection closed by 192.168.122.30 port 44740
Nov 25 23:36:41 compute-0 sshd-session[113867]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:36:41 compute-0 systemd-logind[789]: Session 37 logged out. Waiting for processes to exit.
Nov 25 23:36:41 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 25 23:36:41 compute-0 systemd[1]: session-37.scope: Consumed 2.858s CPU time.
Nov 25 23:36:41 compute-0 systemd-logind[789]: Removed session 37.
Nov 25 23:36:41 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 25 23:36:41 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 25 23:36:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:42 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 25 23:36:42 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 25 23:36:43 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.7 deep-scrub starts
Nov 25 23:36:43 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.7 deep-scrub ok
Nov 25 23:36:43 compute-0 ceph-mon[75654]: 3.1e scrub starts
Nov 25 23:36:43 compute-0 ceph-mon[75654]: 3.1e scrub ok
Nov 25 23:36:43 compute-0 ceph-mon[75654]: pgmap v197: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:43 compute-0 ceph-mon[75654]: 2.1f scrub starts
Nov 25 23:36:43 compute-0 ceph-mon[75654]: 2.1f scrub ok
Nov 25 23:36:43 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 25 23:36:43 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 25 23:36:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:44 compute-0 ceph-mon[75654]: 6.7 deep-scrub starts
Nov 25 23:36:44 compute-0 ceph-mon[75654]: 6.7 deep-scrub ok
Nov 25 23:36:44 compute-0 ceph-mon[75654]: 7.1 scrub starts
Nov 25 23:36:44 compute-0 ceph-mon[75654]: 7.1 scrub ok
Nov 25 23:36:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:45 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 25 23:36:45 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 25 23:36:45 compute-0 ceph-mon[75654]: pgmap v198: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:45 compute-0 ceph-mon[75654]: 6.3 scrub starts
Nov 25 23:36:45 compute-0 ceph-mon[75654]: 6.3 scrub ok
Nov 25 23:36:45 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 25 23:36:45 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 25 23:36:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:47 compute-0 sshd-session[114393]: Accepted publickey for zuul from 192.168.122.30 port 42550 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:36:47 compute-0 systemd-logind[789]: New session 38 of user zuul.
Nov 25 23:36:47 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 25 23:36:47 compute-0 sshd-session[114393]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:36:47 compute-0 ceph-mon[75654]: 7.1a scrub starts
Nov 25 23:36:47 compute-0 ceph-mon[75654]: 7.1a scrub ok
Nov 25 23:36:47 compute-0 ceph-mon[75654]: pgmap v199: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:47 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Nov 25 23:36:47 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Nov 25 23:36:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v200: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:48 compute-0 python3.9[114546]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:36:48 compute-0 ceph-mon[75654]: 3.7 deep-scrub starts
Nov 25 23:36:48 compute-0 ceph-mon[75654]: 3.7 deep-scrub ok
Nov 25 23:36:49 compute-0 python3.9[114700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:36:49 compute-0 ceph-mon[75654]: pgmap v200: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:50 compute-0 sudo[114854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvexdwpauaiowjyqlerxxtthfjocrfom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113810.0150125-40-59614215507808/AnsiballZ_setup.py'
Nov 25 23:36:50 compute-0 sudo[114854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:50 compute-0 python3.9[114856]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:36:51 compute-0 sudo[114854]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:51 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 25 23:36:51 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 25 23:36:51 compute-0 sudo[114938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvyeaanceeillbtaibrvmrqptwwrkpfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113810.0150125-40-59614215507808/AnsiballZ_dnf.py'
Nov 25 23:36:51 compute-0 sudo[114938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:51 compute-0 ceph-mon[75654]: pgmap v201: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:51 compute-0 ceph-mon[75654]: 6.5 scrub starts
Nov 25 23:36:51 compute-0 ceph-mon[75654]: 6.5 scrub ok
Nov 25 23:36:51 compute-0 python3.9[114940]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:36:51 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 25 23:36:51 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 25 23:36:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:52 compute-0 ceph-mon[75654]: 6.8 scrub starts
Nov 25 23:36:52 compute-0 ceph-mon[75654]: 6.8 scrub ok
Nov 25 23:36:53 compute-0 sudo[114938]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:53 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 25 23:36:53 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 25 23:36:53 compute-0 ceph-mon[75654]: pgmap v202: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:53 compute-0 sudo[115091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwpeogphhdodwnubcynnmmdfaozjvzcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113813.228814-52-61019361303227/AnsiballZ_setup.py'
Nov 25 23:36:53 compute-0 sudo[115091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:53 compute-0 python3.9[115093]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:36:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:54 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 25 23:36:54 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 25 23:36:54 compute-0 sudo[115091]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:54 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 25 23:36:54 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 25 23:36:54 compute-0 ceph-mon[75654]: 5.f scrub starts
Nov 25 23:36:54 compute-0 ceph-mon[75654]: 5.f scrub ok
Nov 25 23:36:54 compute-0 ceph-mon[75654]: 6.9 deep-scrub starts
Nov 25 23:36:54 compute-0 ceph-mon[75654]: 6.9 deep-scrub ok
Nov 25 23:36:55 compute-0 sudo[115286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkvlcppzuckyukpkyxvnmwsobjkcsyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113814.609517-63-163878395648426/AnsiballZ_file.py'
Nov 25 23:36:55 compute-0 sudo[115286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:36:55 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 25 23:36:55 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 25 23:36:55 compute-0 python3.9[115288]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:36:55 compute-0 sudo[115286]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:55 compute-0 ceph-mon[75654]: pgmap v203: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:55 compute-0 ceph-mon[75654]: 2.5 scrub starts
Nov 25 23:36:55 compute-0 ceph-mon[75654]: 2.5 scrub ok
Nov 25 23:36:55 compute-0 ceph-mon[75654]: 6.a scrub starts
Nov 25 23:36:55 compute-0 ceph-mon[75654]: 6.a scrub ok
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:36:56
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'images']
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:36:56 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 25 23:36:56 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:36:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:56 compute-0 sudo[115438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iekenaofmuxfwmkjtmlabtdqhvrstoti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113815.64127-71-266391833418894/AnsiballZ_command.py'
Nov 25 23:36:56 compute-0 sudo[115438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:56 compute-0 python3.9[115440]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:36:56 compute-0 sudo[115438]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:56 compute-0 ceph-mon[75654]: pgmap v204: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:57 compute-0 sudo[115604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otuflmuadmshofenhnqcnjxkxvewdydi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113816.6765804-79-262770036500800/AnsiballZ_stat.py'
Nov 25 23:36:57 compute-0 sudo[115604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:57 compute-0 python3.9[115606]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:36:57 compute-0 sudo[115604]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:57 compute-0 sudo[115682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amqnyjgealvuecacmntgwbgilkdshuja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113816.6765804-79-262770036500800/AnsiballZ_file.py'
Nov 25 23:36:57 compute-0 sudo[115682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:57 compute-0 ceph-mon[75654]: 6.f scrub starts
Nov 25 23:36:57 compute-0 ceph-mon[75654]: 6.f scrub ok
Nov 25 23:36:57 compute-0 python3.9[115684]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:36:57 compute-0 sudo[115682]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:58 compute-0 sudo[115834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkioustpcpqzxnshfgswwjwtwelxrywv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113818.1736772-91-203116062368951/AnsiballZ_stat.py'
Nov 25 23:36:58 compute-0 sudo[115834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:58 compute-0 python3.9[115836]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:36:58 compute-0 sudo[115834]: pam_unix(sudo:session): session closed for user root
Nov 25 23:36:58 compute-0 ceph-mon[75654]: pgmap v205: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:36:59 compute-0 sudo[115912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvlaoyzylkllzcqyltvsfjonejkibagu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113818.1736772-91-203116062368951/AnsiballZ_file.py'
Nov 25 23:36:59 compute-0 sudo[115912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:36:59 compute-0 python3.9[115914]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:36:59 compute-0 sudo[115912]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:00 compute-0 sudo[116064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlrgakcioybokaykwyvfneppzouvvlhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113819.4887598-104-100128161152647/AnsiballZ_ini_file.py'
Nov 25 23:37:00 compute-0 sudo[116064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:00 compute-0 python3.9[116066]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:37:00 compute-0 sudo[116064]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:00 compute-0 sudo[116216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkuteprvikgdoevqkeclxdgunkahlroh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113820.5218704-104-60253546206950/AnsiballZ_ini_file.py'
Nov 25 23:37:00 compute-0 sudo[116216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:01 compute-0 python3.9[116218]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:37:01 compute-0 sudo[116216]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:01 compute-0 ceph-mon[75654]: pgmap v206: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:37:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:37:01 compute-0 sudo[116368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utgfmqzfoncygsxubzfkskcpurufsxsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113821.3494449-104-85737178433211/AnsiballZ_ini_file.py'
Nov 25 23:37:01 compute-0 sudo[116368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:01 compute-0 python3.9[116370]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:37:01 compute-0 sudo[116368]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:02 compute-0 sudo[116520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqrkhennizflkmmjkwgssmvxqgaypkqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113822.116325-104-32301365926644/AnsiballZ_ini_file.py'
Nov 25 23:37:02 compute-0 sudo[116520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:02 compute-0 python3.9[116522]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:37:02 compute-0 sudo[116520]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:03 compute-0 ceph-mon[75654]: pgmap v207: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:03 compute-0 sudo[116672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoqovmpzjbkwqvwfevufgussdbokjrej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113823.0322917-135-177432380806533/AnsiballZ_dnf.py'
Nov 25 23:37:03 compute-0 sudo[116672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:03 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 25 23:37:03 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 25 23:37:03 compute-0 python3.9[116674]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:37:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:04 compute-0 ceph-mon[75654]: 2.9 scrub starts
Nov 25 23:37:04 compute-0 ceph-mon[75654]: 2.9 scrub ok
Nov 25 23:37:04 compute-0 sshd-session[116675]: Invalid user administrator from 68.183.82.237 port 55118
Nov 25 23:37:04 compute-0 sudo[116672]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:05 compute-0 sshd-session[116675]: Received disconnect from 68.183.82.237 port 55118:11: Bye Bye [preauth]
Nov 25 23:37:05 compute-0 sshd-session[116675]: Disconnected from invalid user administrator 68.183.82.237 port 55118 [preauth]
Nov 25 23:37:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:05 compute-0 ceph-mon[75654]: pgmap v208: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:05 compute-0 sudo[116827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzbhxlolpmuznndxozhywzwxuhwuvqia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113825.3733633-146-119853763788504/AnsiballZ_setup.py'
Nov 25 23:37:05 compute-0 sudo[116827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:06 compute-0 python3.9[116829]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:37:06 compute-0 sudo[116827]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:06 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 25 23:37:06 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 25 23:37:06 compute-0 sudo[116981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zprbbbgzmjldreykzxmnxcdtczhfpvyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113826.3681836-154-136671082396593/AnsiballZ_stat.py'
Nov 25 23:37:06 compute-0 sudo[116981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:06 compute-0 python3.9[116983]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:37:06 compute-0 sudo[116981]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:07 compute-0 ceph-mon[75654]: pgmap v209: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:07 compute-0 ceph-mon[75654]: 5.1 scrub starts
Nov 25 23:37:07 compute-0 ceph-mon[75654]: 5.1 scrub ok
Nov 25 23:37:07 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Nov 25 23:37:07 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Nov 25 23:37:07 compute-0 sudo[117133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgwzuxykbhbobfnbulkhqpaqohnuonoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113827.1509018-163-162717446094387/AnsiballZ_stat.py'
Nov 25 23:37:07 compute-0 sudo[117133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:07 compute-0 python3.9[117135]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:37:07 compute-0 sudo[117133]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:08 compute-0 sudo[117285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulefcnbzcocbpwhhsgrexqyskrrhbjiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113827.9243183-173-24864721737578/AnsiballZ_command.py'
Nov 25 23:37:08 compute-0 sudo[117285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:08 compute-0 ceph-mon[75654]: 5.1a deep-scrub starts
Nov 25 23:37:08 compute-0 ceph-mon[75654]: 5.1a deep-scrub ok
Nov 25 23:37:08 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 25 23:37:08 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 25 23:37:08 compute-0 python3.9[117287]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:37:08 compute-0 sudo[117285]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:09 compute-0 ceph-mon[75654]: pgmap v210: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:09 compute-0 ceph-mon[75654]: 5.19 scrub starts
Nov 25 23:37:09 compute-0 ceph-mon[75654]: 5.19 scrub ok
Nov 25 23:37:09 compute-0 sudo[117438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvubzrqzefjewovhlzxfwqlnaeirosck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113828.8089218-183-156384415225378/AnsiballZ_service_facts.py'
Nov 25 23:37:09 compute-0 sudo[117438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:09 compute-0 python3.9[117440]: ansible-service_facts Invoked
Nov 25 23:37:09 compute-0 network[117457]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 23:37:09 compute-0 network[117458]: 'network-scripts' will be removed from distribution in near future.
Nov 25 23:37:09 compute-0 network[117459]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 23:37:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:10 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 25 23:37:10 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 25 23:37:11 compute-0 ceph-mon[75654]: pgmap v211: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:11 compute-0 ceph-mon[75654]: 5.c scrub starts
Nov 25 23:37:11 compute-0 ceph-mon[75654]: 5.c scrub ok
Nov 25 23:37:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:12 compute-0 sudo[117438]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:12 compute-0 sudo[117594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:37:12 compute-0 sudo[117594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:12 compute-0 sudo[117594]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:12 compute-0 sudo[117619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:37:12 compute-0 sudo[117619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:12 compute-0 sudo[117619]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:12 compute-0 sudo[117644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:37:12 compute-0 sudo[117644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:12 compute-0 sudo[117644]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:12 compute-0 sudo[117669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:37:12 compute-0 sudo[117669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:13 compute-0 ceph-mon[75654]: pgmap v212: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:13 compute-0 sudo[117669]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:37:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:37:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:37:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:37:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:37:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:37:13 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 220bf64d-8508-4af9-b585-0deccf424ab5 does not exist
Nov 25 23:37:13 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 6c6add32-be32-4aa1-b6a0-04fbe6fbf5b5 does not exist
Nov 25 23:37:13 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 272ebab4-8c82-4f61-9e92-d5d8d23e77fb does not exist
Nov 25 23:37:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:37:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:37:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:37:13 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:37:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:37:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:37:13 compute-0 sudo[117880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkaajemvtfrhrsnfnbsizijjfvuxvkra ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764113833.0300255-198-153420007109661/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764113833.0300255-198-153420007109661/args'
Nov 25 23:37:13 compute-0 sudo[117880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:13 compute-0 sudo[117867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:37:13 compute-0 sudo[117867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:13 compute-0 sudo[117867]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:13 compute-0 sudo[117880]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:13 compute-0 sudo[117905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:37:13 compute-0 sudo[117905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:13 compute-0 sudo[117905]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:13 compute-0 sudo[117942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:37:13 compute-0 sudo[117942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:13 compute-0 sudo[117942]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:13 compute-0 sudo[117990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:37:13 compute-0 sudo[117990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:14 compute-0 podman[118109]: 2025-11-25 23:37:14.145906122 +0000 UTC m=+0.072770125 container create 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:37:14 compute-0 systemd[77281]: Created slice User Background Tasks Slice.
Nov 25 23:37:14 compute-0 systemd[77281]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 23:37:14 compute-0 systemd[1]: Started libpod-conmon-7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b.scope.
Nov 25 23:37:14 compute-0 systemd[77281]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 23:37:14 compute-0 podman[118109]: 2025-11-25 23:37:14.108914008 +0000 UTC m=+0.035778021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:37:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:37:14 compute-0 podman[118109]: 2025-11-25 23:37:14.253628294 +0000 UTC m=+0.180492297 container init 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:37:14 compute-0 podman[118109]: 2025-11-25 23:37:14.262216888 +0000 UTC m=+0.189080901 container start 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:37:14 compute-0 podman[118109]: 2025-11-25 23:37:14.265881967 +0000 UTC m=+0.192745950 container attach 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 23:37:14 compute-0 magical_mayer[118157]: 167 167
Nov 25 23:37:14 compute-0 systemd[1]: libpod-7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b.scope: Deactivated successfully.
Nov 25 23:37:14 compute-0 podman[118109]: 2025-11-25 23:37:14.27339699 +0000 UTC m=+0.200260963 container died 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 23:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-227bea80e4be7c41e1a7779465f66e1efff767ff5c79b4b0879db2b6cbe72cc7-merged.mount: Deactivated successfully.
Nov 25 23:37:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:37:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:37:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:37:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:37:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:37:14 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:37:14 compute-0 podman[118109]: 2025-11-25 23:37:14.324890817 +0000 UTC m=+0.251754780 container remove 7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 25 23:37:14 compute-0 sudo[118211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odfibcszexpgogaytilgosdmjuzxphnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113833.930632-209-204015597935059/AnsiballZ_dnf.py'
Nov 25 23:37:14 compute-0 sudo[118211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:14 compute-0 systemd[1]: libpod-conmon-7c17e052a27c06b5f6b6263096fc9ef98885e152b1e202dbd3ed5c0f5c471c9b.scope: Deactivated successfully.
Nov 25 23:37:14 compute-0 podman[118225]: 2025-11-25 23:37:14.481240279 +0000 UTC m=+0.044386086 container create a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:37:14 compute-0 systemd[1]: Started libpod-conmon-a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826.scope.
Nov 25 23:37:14 compute-0 python3.9[118217]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:37:14 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:14 compute-0 podman[118225]: 2025-11-25 23:37:14.462691326 +0000 UTC m=+0.025837173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:14 compute-0 podman[118225]: 2025-11-25 23:37:14.572952846 +0000 UTC m=+0.136098653 container init a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 23:37:14 compute-0 podman[118225]: 2025-11-25 23:37:14.58968023 +0000 UTC m=+0.152826027 container start a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 23:37:14 compute-0 podman[118225]: 2025-11-25 23:37:14.592848277 +0000 UTC m=+0.155994094 container attach a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:37:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:15 compute-0 ceph-mon[75654]: pgmap v213: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:15 compute-0 zen_liskov[118242]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:37:15 compute-0 zen_liskov[118242]: --> relative data size: 1.0
Nov 25 23:37:15 compute-0 zen_liskov[118242]: --> All data devices are unavailable
Nov 25 23:37:15 compute-0 systemd[1]: libpod-a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826.scope: Deactivated successfully.
Nov 25 23:37:15 compute-0 podman[118225]: 2025-11-25 23:37:15.583588383 +0000 UTC m=+1.146734210 container died a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 23:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-89592a278bb2631252a9acc106f1e0c1d51ffafd5826d4c06b7f7d7054a333b2-merged.mount: Deactivated successfully.
Nov 25 23:37:15 compute-0 podman[118225]: 2025-11-25 23:37:15.631498562 +0000 UTC m=+1.194644359 container remove a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:37:15 compute-0 systemd[1]: libpod-conmon-a0676da237e6cb729e156b335b1556ee283facaa08f8d191f217fb6a5b1c0826.scope: Deactivated successfully.
Nov 25 23:37:15 compute-0 sudo[117990]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:15 compute-0 sudo[118286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:37:15 compute-0 sudo[118286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:15 compute-0 sudo[118286]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:15 compute-0 sudo[118311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:37:15 compute-0 sudo[118311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:15 compute-0 sudo[118311]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:15 compute-0 sudo[118211]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:15 compute-0 sudo[118336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:37:15 compute-0 sudo[118336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:15 compute-0 sudo[118336]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:15 compute-0 sudo[118385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:37:15 compute-0 sudo[118385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:16 compute-0 podman[118499]: 2025-11-25 23:37:16.237800599 +0000 UTC m=+0.048013443 container create 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 23:37:16 compute-0 systemd[1]: Started libpod-conmon-30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd.scope.
Nov 25 23:37:16 compute-0 podman[118499]: 2025-11-25 23:37:16.218410234 +0000 UTC m=+0.028623098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:37:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:37:16 compute-0 podman[118499]: 2025-11-25 23:37:16.331580363 +0000 UTC m=+0.141793287 container init 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:37:16 compute-0 podman[118499]: 2025-11-25 23:37:16.339961851 +0000 UTC m=+0.150174695 container start 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:37:16 compute-0 podman[118499]: 2025-11-25 23:37:16.343146668 +0000 UTC m=+0.153359542 container attach 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:37:16 compute-0 fervent_banzai[118520]: 167 167
Nov 25 23:37:16 compute-0 systemd[1]: libpod-30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd.scope: Deactivated successfully.
Nov 25 23:37:16 compute-0 podman[118499]: 2025-11-25 23:37:16.348108172 +0000 UTC m=+0.158321056 container died 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-33752ad957791cf824ad45e96ebac8bbc5c1182b9850d5d8eb6b4677bd055405-merged.mount: Deactivated successfully.
Nov 25 23:37:16 compute-0 podman[118499]: 2025-11-25 23:37:16.402332923 +0000 UTC m=+0.212545807 container remove 30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:37:16 compute-0 systemd[1]: libpod-conmon-30867d502522287df29f8c477efc1325e6ceeffa5de068aeae445d8c6da754bd.scope: Deactivated successfully.
Nov 25 23:37:16 compute-0 podman[118546]: 2025-11-25 23:37:16.584368401 +0000 UTC m=+0.040931101 container create bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 23:37:16 compute-0 systemd[1]: Started libpod-conmon-bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2.scope.
Nov 25 23:37:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f5128b91b261dadf6b8b44ab457cac2c30df9e7796175236c845c4b7108797/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f5128b91b261dadf6b8b44ab457cac2c30df9e7796175236c845c4b7108797/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f5128b91b261dadf6b8b44ab457cac2c30df9e7796175236c845c4b7108797/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22f5128b91b261dadf6b8b44ab457cac2c30df9e7796175236c845c4b7108797/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:16 compute-0 podman[118546]: 2025-11-25 23:37:16.567767491 +0000 UTC m=+0.024330201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:37:16 compute-0 podman[118546]: 2025-11-25 23:37:16.664727521 +0000 UTC m=+0.121290231 container init bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 25 23:37:16 compute-0 podman[118546]: 2025-11-25 23:37:16.673412917 +0000 UTC m=+0.129975607 container start bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:37:16 compute-0 podman[118546]: 2025-11-25 23:37:16.676394798 +0000 UTC m=+0.132957508 container attach bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:37:16 compute-0 sudo[118640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswcwixukpbljinjhaykrotkgzhfvwlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113836.1056454-222-83602644106364/AnsiballZ_package_facts.py'
Nov 25 23:37:16 compute-0 sudo[118640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:17 compute-0 python3.9[118642]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 23:37:17 compute-0 sudo[118640]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:17 compute-0 admiring_cannon[118585]: {
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:     "0": [
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:         {
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "devices": [
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "/dev/loop3"
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             ],
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_name": "ceph_lv0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_size": "21470642176",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "name": "ceph_lv0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "tags": {
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.cluster_name": "ceph",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.crush_device_class": "",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.encrypted": "0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.osd_id": "0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.type": "block",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.vdo": "0"
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             },
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "type": "block",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "vg_name": "ceph_vg0"
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:         }
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:     ],
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:     "1": [
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:         {
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "devices": [
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "/dev/loop4"
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             ],
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_name": "ceph_lv1",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_size": "21470642176",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "name": "ceph_lv1",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "tags": {
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.cluster_name": "ceph",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.crush_device_class": "",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.encrypted": "0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.osd_id": "1",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.type": "block",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.vdo": "0"
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             },
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "type": "block",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "vg_name": "ceph_vg1"
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:         }
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:     ],
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:     "2": [
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:         {
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "devices": [
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "/dev/loop5"
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             ],
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_name": "ceph_lv2",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_size": "21470642176",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "name": "ceph_lv2",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "tags": {
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.cluster_name": "ceph",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.crush_device_class": "",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.encrypted": "0",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.osd_id": "2",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.type": "block",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:                 "ceph.vdo": "0"
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             },
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "type": "block",
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:             "vg_name": "ceph_vg2"
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:         }
Nov 25 23:37:17 compute-0 admiring_cannon[118585]:     ]
Nov 25 23:37:17 compute-0 admiring_cannon[118585]: }
Nov 25 23:37:17 compute-0 systemd[1]: libpod-bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2.scope: Deactivated successfully.
Nov 25 23:37:17 compute-0 conmon[118585]: conmon bbec863d3f25cd3b53b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2.scope/container/memory.events
Nov 25 23:37:17 compute-0 podman[118546]: 2025-11-25 23:37:17.42442809 +0000 UTC m=+0.880990820 container died bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:37:17 compute-0 ceph-mon[75654]: pgmap v214: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-22f5128b91b261dadf6b8b44ab457cac2c30df9e7796175236c845c4b7108797-merged.mount: Deactivated successfully.
Nov 25 23:37:17 compute-0 podman[118546]: 2025-11-25 23:37:17.492861077 +0000 UTC m=+0.949423817 container remove bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cannon, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:37:17 compute-0 systemd[1]: libpod-conmon-bbec863d3f25cd3b53b8a0f88c9c34c3ab622d5a6732f234ff8a340992259df2.scope: Deactivated successfully.
Nov 25 23:37:17 compute-0 sudo[118385]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:17 compute-0 sudo[118683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:37:17 compute-0 sudo[118683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:17 compute-0 sudo[118683]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:17 compute-0 sudo[118708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:37:17 compute-0 sudo[118708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:17 compute-0 sudo[118708]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:17 compute-0 sudo[118733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:37:17 compute-0 sudo[118733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:17 compute-0 sudo[118733]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:17 compute-0 sudo[118758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:37:17 compute-0 sudo[118758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:18 compute-0 podman[118901]: 2025-11-25 23:37:18.049074765 +0000 UTC m=+0.034773795 container create a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:37:18 compute-0 systemd[1]: Started libpod-conmon-a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222.scope.
Nov 25 23:37:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:37:18 compute-0 podman[118901]: 2025-11-25 23:37:18.033677967 +0000 UTC m=+0.019377007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:37:18 compute-0 podman[118901]: 2025-11-25 23:37:18.137988788 +0000 UTC m=+0.123687858 container init a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 23:37:18 compute-0 podman[118901]: 2025-11-25 23:37:18.144198236 +0000 UTC m=+0.129897266 container start a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 23:37:18 compute-0 podman[118901]: 2025-11-25 23:37:18.147495105 +0000 UTC m=+0.133194135 container attach a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:37:18 compute-0 fervent_khayyam[118941]: 167 167
Nov 25 23:37:18 compute-0 systemd[1]: libpod-a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222.scope: Deactivated successfully.
Nov 25 23:37:18 compute-0 podman[118901]: 2025-11-25 23:37:18.150109036 +0000 UTC m=+0.135808106 container died a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 25 23:37:18 compute-0 sudo[118970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaywtggipqgzrwmjwhpvifjpudyqjzui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113837.7958233-232-86437947745851/AnsiballZ_stat.py'
Nov 25 23:37:18 compute-0 sudo[118970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcfca174ecf074617875eb89a32421a4c61b785eb2e7b1019f224e0fbbd47336-merged.mount: Deactivated successfully.
Nov 25 23:37:18 compute-0 podman[118901]: 2025-11-25 23:37:18.189267868 +0000 UTC m=+0.174966918 container remove a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:37:18 compute-0 systemd[1]: libpod-conmon-a7e06e20a87cb1f581cc47b2b4269a82d019bb831cb44859f15b32dde8c85222.scope: Deactivated successfully.
Nov 25 23:37:18 compute-0 python3.9[118975]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:18 compute-0 sudo[118970]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:18 compute-0 podman[118994]: 2025-11-25 23:37:18.410265093 +0000 UTC m=+0.064736557 container create 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:37:18 compute-0 systemd[1]: Started libpod-conmon-6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce.scope.
Nov 25 23:37:18 compute-0 podman[118994]: 2025-11-25 23:37:18.387576928 +0000 UTC m=+0.042048482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:37:18 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f29ad36382586aa3ccc201d37b3cfe620ed08d292d1f5ea7e01575afa66d189/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f29ad36382586aa3ccc201d37b3cfe620ed08d292d1f5ea7e01575afa66d189/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f29ad36382586aa3ccc201d37b3cfe620ed08d292d1f5ea7e01575afa66d189/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f29ad36382586aa3ccc201d37b3cfe620ed08d292d1f5ea7e01575afa66d189/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:37:18 compute-0 podman[118994]: 2025-11-25 23:37:18.49971238 +0000 UTC m=+0.154183854 container init 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 23:37:18 compute-0 podman[118994]: 2025-11-25 23:37:18.508359944 +0000 UTC m=+0.162831418 container start 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:37:18 compute-0 podman[118994]: 2025-11-25 23:37:18.51189283 +0000 UTC m=+0.166364304 container attach 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:37:18 compute-0 sudo[119090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkfnoswoarifnxmdcnhswicqdgmldxdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113837.7958233-232-86437947745851/AnsiballZ_file.py'
Nov 25 23:37:18 compute-0 sudo[119090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:18 compute-0 python3.9[119092]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:18 compute-0 sudo[119090]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:19 compute-0 sshd-session[118806]: Invalid user student1 from 202.83.162.167 port 34526
Nov 25 23:37:19 compute-0 sshd-session[118806]: Received disconnect from 202.83.162.167 port 34526:11: Bye Bye [preauth]
Nov 25 23:37:19 compute-0 sshd-session[118806]: Disconnected from invalid user student1 202.83.162.167 port 34526 [preauth]
Nov 25 23:37:19 compute-0 ceph-mon[75654]: pgmap v215: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]: {
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "osd_id": 2,
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "type": "bluestore"
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:     },
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "osd_id": 1,
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "type": "bluestore"
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:     },
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "osd_id": 0,
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:         "type": "bluestore"
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]:     }
Nov 25 23:37:19 compute-0 dreamy_franklin[119032]: }
Nov 25 23:37:19 compute-0 systemd[1]: libpod-6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce.scope: Deactivated successfully.
Nov 25 23:37:19 compute-0 systemd[1]: libpod-6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce.scope: Consumed 1.051s CPU time.
Nov 25 23:37:19 compute-0 podman[118994]: 2025-11-25 23:37:19.552776677 +0000 UTC m=+1.207248151 container died 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f29ad36382586aa3ccc201d37b3cfe620ed08d292d1f5ea7e01575afa66d189-merged.mount: Deactivated successfully.
Nov 25 23:37:19 compute-0 podman[118994]: 2025-11-25 23:37:19.604756987 +0000 UTC m=+1.259228451 container remove 6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_franklin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:37:19 compute-0 systemd[1]: libpod-conmon-6046813d3304962b535d82e7db2eaf540514605391b30439a76c7bbbc27de2ce.scope: Deactivated successfully.
Nov 25 23:37:19 compute-0 sudo[118758]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:37:19 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:37:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:37:19 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:37:19 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev baefa044-b1f4-4dd6-be5e-d835ba20b098 does not exist
Nov 25 23:37:19 compute-0 sudo[119256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:37:19 compute-0 sudo[119305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfmjsnriwuxryvjylzodzqvzhiswdgjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113839.1247923-244-188579548080181/AnsiballZ_stat.py'
Nov 25 23:37:19 compute-0 sudo[119256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:19 compute-0 sudo[119305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:19 compute-0 sudo[119256]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:19 compute-0 sudo[119310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:37:19 compute-0 sudo[119310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:37:19 compute-0 sudo[119310]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:19 compute-0 python3.9[119308]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:19 compute-0 sudo[119305]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:20 compute-0 sudo[119410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drxzulrlezboclvzoexhshrgmsttvlza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113839.1247923-244-188579548080181/AnsiballZ_file.py'
Nov 25 23:37:20 compute-0 sudo[119410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:20 compute-0 python3.9[119412]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:20 compute-0 sudo[119410]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:37:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:37:20 compute-0 ceph-mon[75654]: pgmap v216: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:21 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Nov 25 23:37:21 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Nov 25 23:37:21 compute-0 sudo[119564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxpugedwtfazjxpsisytpuyjpswarjcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113841.0095878-262-37440973459460/AnsiballZ_lineinfile.py'
Nov 25 23:37:21 compute-0 sudo[119564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:21 compute-0 python3.9[119566]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:21 compute-0 sudo[119564]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:21 compute-0 sshd-session[119414]: Received disconnect from 124.18.141.70 port 51132:11: Bye Bye [preauth]
Nov 25 23:37:21 compute-0 sshd-session[119414]: Disconnected from authenticating user root 124.18.141.70 port 51132 [preauth]
Nov 25 23:37:21 compute-0 ceph-mon[75654]: 5.18 deep-scrub starts
Nov 25 23:37:21 compute-0 ceph-mon[75654]: 5.18 deep-scrub ok
Nov 25 23:37:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:22 compute-0 sudo[119716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhyhqafapwvlbilsdlhkovqqpkbyjdkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113842.2854364-277-35254983442819/AnsiballZ_setup.py'
Nov 25 23:37:22 compute-0 sudo[119716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:22 compute-0 ceph-mon[75654]: pgmap v217: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:23 compute-0 python3.9[119718]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:37:23 compute-0 sudo[119716]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:23 compute-0 sudo[119800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enagojhfvoigipysrgfqqdnxjptvlaud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113842.2854364-277-35254983442819/AnsiballZ_systemd.py'
Nov 25 23:37:23 compute-0 sudo[119800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:24 compute-0 python3.9[119802]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:37:24 compute-0 sudo[119800]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:25 compute-0 ceph-mon[75654]: pgmap v218: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:25 compute-0 sshd-session[114396]: Connection closed by 192.168.122.30 port 42550
Nov 25 23:37:25 compute-0 sshd-session[114393]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:37:25 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 25 23:37:25 compute-0 systemd[1]: session-38.scope: Consumed 26.181s CPU time.
Nov 25 23:37:25 compute-0 systemd-logind[789]: Session 38 logged out. Waiting for processes to exit.
Nov 25 23:37:25 compute-0 systemd-logind[789]: Removed session 38.
Nov 25 23:37:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:37:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:37:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:37:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:37:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:37:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:37:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:26 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 25 23:37:26 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 25 23:37:27 compute-0 ceph-mon[75654]: pgmap v219: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:27 compute-0 ceph-mon[75654]: 6.2 deep-scrub starts
Nov 25 23:37:27 compute-0 ceph-mon[75654]: 6.2 deep-scrub ok
Nov 25 23:37:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:29 compute-0 ceph-mon[75654]: pgmap v220: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:30 compute-0 sshd-session[119830]: Accepted publickey for zuul from 192.168.122.30 port 42236 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:37:30 compute-0 systemd-logind[789]: New session 39 of user zuul.
Nov 25 23:37:30 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 25 23:37:30 compute-0 sshd-session[119830]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:37:31 compute-0 ceph-mon[75654]: pgmap v221: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:31 compute-0 sudo[119983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjgmnjwkeuubgaxqykicdatolqabqbdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113850.7961187-22-167443977899388/AnsiballZ_file.py'
Nov 25 23:37:31 compute-0 sudo[119983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:31 compute-0 python3.9[119985]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:31 compute-0 sudo[119983]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:32 compute-0 sudo[120135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-merlkanblkbyiyodsctqkltpotexgtjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113851.724693-34-183333902252062/AnsiballZ_stat.py'
Nov 25 23:37:32 compute-0 sudo[120135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:32 compute-0 python3.9[120137]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:32 compute-0 sudo[120135]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:32 compute-0 sudo[120213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytylffcwjaphjyznmctgggpjycllkpgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113851.724693-34-183333902252062/AnsiballZ_file.py'
Nov 25 23:37:32 compute-0 sudo[120213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:33 compute-0 python3.9[120215]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:33 compute-0 sudo[120213]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:33 compute-0 ceph-mon[75654]: pgmap v222: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:33 compute-0 sshd-session[119833]: Connection closed by 192.168.122.30 port 42236
Nov 25 23:37:33 compute-0 sshd-session[119830]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:37:33 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 25 23:37:33 compute-0 systemd[1]: session-39.scope: Consumed 1.772s CPU time.
Nov 25 23:37:33 compute-0 systemd-logind[789]: Session 39 logged out. Waiting for processes to exit.
Nov 25 23:37:33 compute-0 systemd-logind[789]: Removed session 39.
Nov 25 23:37:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:34 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 25 23:37:34 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 25 23:37:35 compute-0 ceph-mon[75654]: pgmap v223: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:35 compute-0 ceph-mon[75654]: 6.6 scrub starts
Nov 25 23:37:35 compute-0 ceph-mon[75654]: 6.6 scrub ok
Nov 25 23:37:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:36 compute-0 sshd-session[120240]: Invalid user tuan from 45.78.216.132 port 42342
Nov 25 23:37:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:37 compute-0 ceph-mon[75654]: pgmap v224: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:37 compute-0 sshd-session[120240]: Received disconnect from 45.78.216.132 port 42342:11: Bye Bye [preauth]
Nov 25 23:37:37 compute-0 sshd-session[120240]: Disconnected from invalid user tuan 45.78.216.132 port 42342 [preauth]
Nov 25 23:37:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:38 compute-0 sshd-session[120242]: Accepted publickey for zuul from 192.168.122.30 port 34864 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:37:38 compute-0 systemd-logind[789]: New session 40 of user zuul.
Nov 25 23:37:38 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 25 23:37:38 compute-0 sshd-session[120242]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:37:39 compute-0 ceph-mon[75654]: pgmap v225: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 25 23:37:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 25 23:37:40 compute-0 python3.9[120395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:37:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:40 compute-0 ceph-mon[75654]: 6.e scrub starts
Nov 25 23:37:40 compute-0 ceph-mon[75654]: 6.e scrub ok
Nov 25 23:37:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 25 23:37:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 25 23:37:41 compute-0 sudo[120549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmogmdmcptrqeiepbijpalegrxkamsjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113860.4702394-33-28431358211460/AnsiballZ_file.py'
Nov 25 23:37:41 compute-0 sudo[120549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:41 compute-0 python3.9[120551]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:41 compute-0 ceph-mon[75654]: pgmap v226: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:41 compute-0 sudo[120549]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:41 compute-0 ceph-mon[75654]: 6.c scrub starts
Nov 25 23:37:41 compute-0 ceph-mon[75654]: 6.c scrub ok
Nov 25 23:37:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:42 compute-0 sudo[120724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcfsafftqrnhggmduogavakfpxalszve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113861.490126-41-37993570976192/AnsiballZ_stat.py'
Nov 25 23:37:42 compute-0 sudo[120724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:42 compute-0 python3.9[120726]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:42 compute-0 sudo[120724]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:42 compute-0 sudo[120802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaubfkltqelxiwnekzrsrghuuiqkygfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113861.490126-41-37993570976192/AnsiballZ_file.py'
Nov 25 23:37:42 compute-0 sudo[120802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:42 compute-0 python3.9[120804]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.pv7yy4w8 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:43 compute-0 sudo[120802]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:43 compute-0 ceph-mon[75654]: pgmap v227: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:43 compute-0 sudo[120954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrtmhymklbrbzizdqcnshpgwbiyzhwxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113863.442054-61-133490174917650/AnsiballZ_stat.py'
Nov 25 23:37:43 compute-0 sudo[120954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:44 compute-0 python3.9[120956]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:44 compute-0 sudo[120954]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:44 compute-0 sudo[121032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsopqbebyrsynymzzslluuketxywrtib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113863.442054-61-133490174917650/AnsiballZ_file.py'
Nov 25 23:37:44 compute-0 sudo[121032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:44 compute-0 python3.9[121034]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.872vas6e recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:44 compute-0 sudo[121032]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:45 compute-0 sudo[121184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sejxdejfxrsqqmpjmtbmzcquibdjdmlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113864.8442426-74-32588935034183/AnsiballZ_file.py'
Nov 25 23:37:45 compute-0 sudo[121184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:45 compute-0 ceph-mon[75654]: pgmap v228: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:45 compute-0 python3.9[121186]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:37:45 compute-0 sudo[121184]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:45 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 25 23:37:45 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 25 23:37:46 compute-0 sudo[121336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrhcxisoxwqrpynockdydfwllcksubbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113865.643014-82-83924346885815/AnsiballZ_stat.py'
Nov 25 23:37:46 compute-0 sudo[121336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:46 compute-0 python3.9[121338]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:46 compute-0 sudo[121336]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:46 compute-0 ceph-mon[75654]: 6.4 scrub starts
Nov 25 23:37:46 compute-0 ceph-mon[75654]: 6.4 scrub ok
Nov 25 23:37:46 compute-0 sudo[121414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqvqiwotrixfghmkrwdtowpmivguqxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113865.643014-82-83924346885815/AnsiballZ_file.py'
Nov 25 23:37:46 compute-0 sudo[121414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:46 compute-0 python3.9[121416]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:37:46 compute-0 sudo[121414]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:47 compute-0 ceph-mon[75654]: pgmap v229: 177 pgs: 177 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:47 compute-0 sudo[121566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvltptrihigzvjrdyjdncjspaowhkmqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113867.0464563-82-195310573130519/AnsiballZ_stat.py'
Nov 25 23:37:47 compute-0 sudo[121566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:47 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 25 23:37:47 compute-0 python3.9[121568]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:47 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 25 23:37:47 compute-0 sudo[121566]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:47 compute-0 sudo[121644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewdtbbecjyanizuckgsrdwlnmwuuatyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113867.0464563-82-195310573130519/AnsiballZ_file.py'
Nov 25 23:37:47 compute-0 sudo[121644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:48 compute-0 python3.9[121646]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:37:48 compute-0 sudo[121644]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:48 compute-0 ceph-mon[75654]: 6.b scrub starts
Nov 25 23:37:48 compute-0 ceph-mon[75654]: 6.b scrub ok
Nov 25 23:37:48 compute-0 sudo[121796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxoaenfsihdbliaanxhmlzmatjdquhet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113868.2550144-105-19870029595318/AnsiballZ_file.py'
Nov 25 23:37:48 compute-0 sudo[121796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:48 compute-0 python3.9[121798]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:48 compute-0 sudo[121796]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:49 compute-0 ceph-mon[75654]: pgmap v230: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:49 compute-0 sudo[121948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzhkxemjfgihrcccohmplmxjudrvvvee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113869.1093967-113-185066205536973/AnsiballZ_stat.py'
Nov 25 23:37:49 compute-0 sudo[121948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:49 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 25 23:37:49 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 25 23:37:49 compute-0 python3.9[121950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:49 compute-0 sudo[121948]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:49 compute-0 sudo[122026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbnfwhzfxadxzpezkirsmtswrxtrpcta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113869.1093967-113-185066205536973/AnsiballZ_file.py'
Nov 25 23:37:49 compute-0 sudo[122026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:50 compute-0 python3.9[122028]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:50 compute-0 sudo[122026]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:50 compute-0 ceph-mon[75654]: 6.d scrub starts
Nov 25 23:37:50 compute-0 ceph-mon[75654]: 6.d scrub ok
Nov 25 23:37:50 compute-0 sudo[122178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxieotfizmpspioygllsrktuxaabjpjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113870.4008527-125-203836697884754/AnsiballZ_stat.py'
Nov 25 23:37:50 compute-0 sudo[122178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:50 compute-0 python3.9[122180]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:51 compute-0 sudo[122178]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:51 compute-0 sudo[122256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfncckxxowkiiqowdvdlycceekqwnqtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113870.4008527-125-203836697884754/AnsiballZ_file.py'
Nov 25 23:37:51 compute-0 sudo[122256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:51 compute-0 ceph-mon[75654]: pgmap v231: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:51 compute-0 python3.9[122258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:51 compute-0 sudo[122256]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v232: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:52 compute-0 sudo[122408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eebeashuvsfmojzuliczynxzkkjjxjez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113871.779002-137-145697715180099/AnsiballZ_systemd.py'
Nov 25 23:37:52 compute-0 sudo[122408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:52 compute-0 python3.9[122410]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:37:52 compute-0 systemd[1]: Reloading.
Nov 25 23:37:52 compute-0 systemd-rc-local-generator[122438]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:37:52 compute-0 systemd-sysv-generator[122441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:37:53 compute-0 sudo[122408]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:53 compute-0 ceph-mon[75654]: pgmap v232: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:53 compute-0 sudo[122597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msojqncbthfxuyzaenfgrnsecjrspdyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113873.4183946-145-261764974389546/AnsiballZ_stat.py'
Nov 25 23:37:53 compute-0 sudo[122597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:53 compute-0 python3.9[122599]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:54 compute-0 sudo[122597]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:54 compute-0 sudo[122675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poamkslilnmfaolqupqrkudtwbjiznau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113873.4183946-145-261764974389546/AnsiballZ_file.py'
Nov 25 23:37:54 compute-0 sudo[122675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:54 compute-0 python3.9[122677]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:54 compute-0 sudo[122675]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:55 compute-0 sudo[122827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qszwaxjsaohdcoxgstzwdtyjpptruzrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113874.7673078-157-146816251939787/AnsiballZ_stat.py'
Nov 25 23:37:55 compute-0 sudo[122827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:37:55 compute-0 ceph-mon[75654]: pgmap v233: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:55 compute-0 python3.9[122829]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:37:55 compute-0 sudo[122827]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:55 compute-0 sudo[122905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bekgsxznxtcjmavcsffohoryowcnptbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113874.7673078-157-146816251939787/AnsiballZ_file.py'
Nov 25 23:37:55 compute-0 sudo[122905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:55 compute-0 python3.9[122907]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:37:55 compute-0 sudo[122905]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:37:56
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', '.mgr', 'cephfs.cephfs.meta', 'images', 'volumes']
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:37:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:56 compute-0 sudo[123057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acpcssnnhftuufocqxmlaqdbfxijuaoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113876.2128096-169-56093280323870/AnsiballZ_systemd.py'
Nov 25 23:37:56 compute-0 sudo[123057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:37:56 compute-0 python3.9[123059]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:37:56 compute-0 systemd[1]: Reloading.
Nov 25 23:37:57 compute-0 systemd-rc-local-generator[123084]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:37:57 compute-0 systemd-sysv-generator[123089]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:37:57 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 23:37:57 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 23:37:57 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 23:37:57 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 23:37:57 compute-0 sudo[123057]: pam_unix(sudo:session): session closed for user root
Nov 25 23:37:57 compute-0 ceph-mon[75654]: pgmap v234: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v235: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:37:58 compute-0 python3.9[123250]: ansible-ansible.builtin.service_facts Invoked
Nov 25 23:37:58 compute-0 network[123267]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 23:37:58 compute-0 network[123268]: 'network-scripts' will be removed from distribution in near future.
Nov 25 23:37:58 compute-0 network[123269]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 23:37:59 compute-0 ceph-mon[75654]: pgmap v235: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:01 compute-0 ceph-mon[75654]: pgmap v236: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:38:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:38:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:03 compute-0 sudo[123529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajnhjiswyatvmgieezyefxoobetkmnaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113882.7949767-195-72467146085812/AnsiballZ_stat.py'
Nov 25 23:38:03 compute-0 sudo[123529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:03 compute-0 ceph-mon[75654]: pgmap v237: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:03 compute-0 python3.9[123531]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:03 compute-0 sudo[123529]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:03 compute-0 sudo[123607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcuqaimjqiqielbblrcqhqhyhcsdldlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113882.7949767-195-72467146085812/AnsiballZ_file.py'
Nov 25 23:38:03 compute-0 sudo[123607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:04 compute-0 python3.9[123609]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:04 compute-0 sudo[123607]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:04 compute-0 sudo[123759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvksbcyozxwdwzvvacwimjciqlccotil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113884.369604-208-55555747682212/AnsiballZ_file.py'
Nov 25 23:38:04 compute-0 sudo[123759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:04 compute-0 python3.9[123761]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:04 compute-0 sudo[123759]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:05 compute-0 ceph-mon[75654]: pgmap v238: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:05 compute-0 sudo[123911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icbxzprzirxqeibkryfucwctujgispok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113885.1462212-216-11994592200092/AnsiballZ_stat.py'
Nov 25 23:38:05 compute-0 sudo[123911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:05 compute-0 python3.9[123913]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:05 compute-0 sudo[123911]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:06 compute-0 sudo[123989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfndrfbxfvupxbliytrsdveshlxpmqbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113885.1462212-216-11994592200092/AnsiballZ_file.py'
Nov 25 23:38:06 compute-0 sudo[123989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:06 compute-0 python3.9[123991]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:06 compute-0 sudo[123989]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:07 compute-0 sudo[124141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjgbqxutbjskzsrfwalbdqqkkgaakrmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113886.7307775-231-61986820008046/AnsiballZ_timezone.py'
Nov 25 23:38:07 compute-0 sudo[124141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:07 compute-0 ceph-mon[75654]: pgmap v239: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:07 compute-0 python3.9[124143]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 23:38:07 compute-0 systemd[1]: Starting Time & Date Service...
Nov 25 23:38:07 compute-0 systemd[1]: Started Time & Date Service.
Nov 25 23:38:07 compute-0 sudo[124141]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:08 compute-0 sudo[124297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cznjnkjveftgomnnyhmzrnlhseoalluf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113888.0668516-240-249087005246581/AnsiballZ_file.py'
Nov 25 23:38:08 compute-0 sudo[124297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:08 compute-0 python3.9[124299]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:08 compute-0 sudo[124297]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:09 compute-0 sudo[124449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grnykoiiznthndodnrlhwsctlnfqgwlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113888.928655-248-275241854891390/AnsiballZ_stat.py'
Nov 25 23:38:09 compute-0 sudo[124449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:09 compute-0 ceph-mon[75654]: pgmap v240: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:09 compute-0 python3.9[124451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:09 compute-0 sudo[124449]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:09 compute-0 sudo[124527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbnatwellnbolpdrhdodolycdykjthi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113888.928655-248-275241854891390/AnsiballZ_file.py'
Nov 25 23:38:09 compute-0 sudo[124527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:10 compute-0 python3.9[124529]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:10 compute-0 sudo[124527]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:10 compute-0 sudo[124679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbiphxefuztyislnorgzvawmurxnzpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113890.291597-260-217307112798916/AnsiballZ_stat.py'
Nov 25 23:38:10 compute-0 sudo[124679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:10 compute-0 python3.9[124681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:10 compute-0 sudo[124679]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:11 compute-0 sudo[124757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gphkmhdttbyozmpjmvsispwzysiiaoie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113890.291597-260-217307112798916/AnsiballZ_file.py'
Nov 25 23:38:11 compute-0 sudo[124757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:11 compute-0 python3.9[124759]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.qp9ja4rk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:11 compute-0 sudo[124757]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:11 compute-0 ceph-mon[75654]: pgmap v241: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:11 compute-0 sudo[124909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqnugjejdwqkhsudrakmtueqcfqthsxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113891.5384252-272-170907173818215/AnsiballZ_stat.py'
Nov 25 23:38:11 compute-0 sudo[124909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:12 compute-0 python3.9[124911]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:12 compute-0 sudo[124909]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:12 compute-0 sudo[124987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uubgxivotdttxytumqctlneepddllrke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113891.5384252-272-170907173818215/AnsiballZ_file.py'
Nov 25 23:38:12 compute-0 sudo[124987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:12 compute-0 python3.9[124989]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:12 compute-0 sudo[124987]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:13 compute-0 sudo[125139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrskbrjooyflmqkgfjwumczkfqijfeto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113892.866652-285-163848943807109/AnsiballZ_command.py'
Nov 25 23:38:13 compute-0 sudo[125139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:13 compute-0 ceph-mon[75654]: pgmap v242: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:13 compute-0 python3.9[125141]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:38:13 compute-0 sudo[125139]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:14 compute-0 sudo[125292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boeyyxmjoipqqxddaeqkjhzapvipblxb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764113893.9233234-293-137113146421932/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 23:38:14 compute-0 sudo[125292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:14 compute-0 python3[125294]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 23:38:14 compute-0 sudo[125292]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:15 compute-0 sudo[125446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvarxhdegdsefxnetktazfyomhlciygj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113894.964381-301-215293438949825/AnsiballZ_stat.py'
Nov 25 23:38:15 compute-0 sudo[125446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:15 compute-0 ceph-mon[75654]: pgmap v243: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:15 compute-0 python3.9[125448]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:15 compute-0 sudo[125446]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:15 compute-0 sudo[125524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anhiaqmmuagtlvvpmgommsigbnvzentr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113894.964381-301-215293438949825/AnsiballZ_file.py'
Nov 25 23:38:15 compute-0 sudo[125524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:16 compute-0 python3.9[125526]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:16 compute-0 sudo[125524]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:16 compute-0 sshd-session[125295]: Received disconnect from 68.183.82.237 port 37346:11: Bye Bye [preauth]
Nov 25 23:38:16 compute-0 sshd-session[125295]: Disconnected from authenticating user root 68.183.82.237 port 37346 [preauth]
Nov 25 23:38:16 compute-0 sudo[125676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stxndxtbmkhglcwkwozlcvhppcgzbiud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113896.4146812-313-159844375096350/AnsiballZ_stat.py'
Nov 25 23:38:16 compute-0 sudo[125676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:17 compute-0 python3.9[125678]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:17 compute-0 sudo[125676]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:17 compute-0 sudo[125754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzgywasgslixdmmbubgxksyhxnawlvnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113896.4146812-313-159844375096350/AnsiballZ_file.py'
Nov 25 23:38:17 compute-0 sudo[125754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:17 compute-0 ceph-mon[75654]: pgmap v244: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:17 compute-0 python3.9[125756]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:17 compute-0 sudo[125754]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:18 compute-0 sudo[125906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojvzphutyifbayfmhuqhfvfyqygbruun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113897.8449929-325-228049213711883/AnsiballZ_stat.py'
Nov 25 23:38:18 compute-0 sudo[125906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:18 compute-0 python3.9[125908]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:18 compute-0 sudo[125906]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:18 compute-0 sudo[125984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjqvfbmlmcxilchfzedgbzyluslcektc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113897.8449929-325-228049213711883/AnsiballZ_file.py'
Nov 25 23:38:18 compute-0 sudo[125984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:18 compute-0 python3.9[125986]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:19 compute-0 sudo[125984]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:19 compute-0 ceph-mon[75654]: pgmap v245: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:19 compute-0 sudo[126136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xddoakxkjiliruhflzrlnchfyhytsiya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113899.2434676-337-28972700559641/AnsiballZ_stat.py'
Nov 25 23:38:19 compute-0 sudo[126136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:19 compute-0 python3.9[126138]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:19 compute-0 sudo[126136]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:19 compute-0 sudo[126140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:38:19 compute-0 sudo[126140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:19 compute-0 sudo[126140]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:20 compute-0 sudo[126171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:38:20 compute-0 sudo[126171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:20 compute-0 sudo[126171]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:20 compute-0 sudo[126216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:38:20 compute-0 sudo[126216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:20 compute-0 sudo[126216]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:20 compute-0 sudo[126264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:38:20 compute-0 sudo[126264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:20 compute-0 sudo[126312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjajyusawpjmpdgtqpbbpdflbvdoiwxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113899.2434676-337-28972700559641/AnsiballZ_file.py'
Nov 25 23:38:20 compute-0 sudo[126312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:20 compute-0 python3.9[126316]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:20 compute-0 sudo[126312]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:20 compute-0 sudo[126264]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:38:20 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:38:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:38:20 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:38:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:38:20 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:38:20 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev c3376adf-74ee-4b59-81a8-36534ac66217 does not exist
Nov 25 23:38:20 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev a4e3ec02-b0c2-4e80-a8a9-620b3ffbd64f does not exist
Nov 25 23:38:20 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev efa512fb-a8bd-47a6-aa72-9b8cb653995b does not exist
Nov 25 23:38:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:38:20 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:38:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:38:20 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:38:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:38:20 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:38:20 compute-0 sudo[126424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:38:20 compute-0 sudo[126424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:20 compute-0 sudo[126424]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:20 compute-0 sudo[126455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:38:20 compute-0 sudo[126455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:20 compute-0 sudo[126455]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:21 compute-0 sudo[126498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:38:21 compute-0 sudo[126498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:21 compute-0 sudo[126498]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:21 compute-0 sudo[126593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yceloxefmagelkoauxelnapkugrmxegk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113900.6775615-349-260916921392582/AnsiballZ_stat.py'
Nov 25 23:38:21 compute-0 sudo[126593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:21 compute-0 sudo[126553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:38:21 compute-0 sudo[126553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:21 compute-0 python3.9[126598]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:21 compute-0 sudo[126593]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:21 compute-0 ceph-mon[75654]: pgmap v246: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:38:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:38:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:38:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:38:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:38:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:38:21 compute-0 podman[126648]: 2025-11-25 23:38:21.554799147 +0000 UTC m=+0.065571539 container create 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 23:38:21 compute-0 systemd[1]: Started libpod-conmon-005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72.scope.
Nov 25 23:38:21 compute-0 podman[126648]: 2025-11-25 23:38:21.528105521 +0000 UTC m=+0.038877953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:38:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:38:21 compute-0 podman[126648]: 2025-11-25 23:38:21.701005389 +0000 UTC m=+0.211777871 container init 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 23:38:21 compute-0 podman[126648]: 2025-11-25 23:38:21.708888636 +0000 UTC m=+0.219661058 container start 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:38:21 compute-0 podman[126648]: 2025-11-25 23:38:21.713155084 +0000 UTC m=+0.223927506 container attach 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 23:38:21 compute-0 angry_mendel[126703]: 167 167
Nov 25 23:38:21 compute-0 systemd[1]: libpod-005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72.scope: Deactivated successfully.
Nov 25 23:38:21 compute-0 podman[126648]: 2025-11-25 23:38:21.715264952 +0000 UTC m=+0.226037384 container died 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:38:21 compute-0 sudo[126734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myocvthnlyyyatgenfqazidlnhqcqvsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113900.6775615-349-260916921392582/AnsiballZ_file.py'
Nov 25 23:38:21 compute-0 sudo[126734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cc46effeec7752c5c6491add84b12da2bab2a140c634c3566cc4917b755c06c-merged.mount: Deactivated successfully.
Nov 25 23:38:21 compute-0 podman[126648]: 2025-11-25 23:38:21.771991667 +0000 UTC m=+0.282764089 container remove 005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 23:38:21 compute-0 systemd[1]: libpod-conmon-005a6a677b9732275f8e23c80e798af746da2ebde5c257917538c4fd48e11d72.scope: Deactivated successfully.
Nov 25 23:38:21 compute-0 python3.9[126744]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:21 compute-0 sudo[126734]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:22 compute-0 podman[126756]: 2025-11-25 23:38:22.00675994 +0000 UTC m=+0.066123784 container create 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 23:38:22 compute-0 systemd[1]: Started libpod-conmon-3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640.scope.
Nov 25 23:38:22 compute-0 podman[126756]: 2025-11-25 23:38:21.976573688 +0000 UTC m=+0.035937572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:38:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:22 compute-0 podman[126756]: 2025-11-25 23:38:22.133022273 +0000 UTC m=+0.192386157 container init 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:38:22 compute-0 podman[126756]: 2025-11-25 23:38:22.139231444 +0000 UTC m=+0.198595288 container start 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 23:38:22 compute-0 podman[126756]: 2025-11-25 23:38:22.143006488 +0000 UTC m=+0.202370332 container attach 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:38:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:22 compute-0 sudo[126926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edjbeuvfazniynruonqjdznyuljetupo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113902.2400286-362-56730924232085/AnsiballZ_command.py'
Nov 25 23:38:22 compute-0 sudo[126926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:22 compute-0 ceph-mon[75654]: pgmap v247: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:22 compute-0 python3.9[126928]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:38:22 compute-0 sudo[126926]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:23 compute-0 exciting_goldwasser[126796]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:38:23 compute-0 exciting_goldwasser[126796]: --> relative data size: 1.0
Nov 25 23:38:23 compute-0 exciting_goldwasser[126796]: --> All data devices are unavailable
Nov 25 23:38:23 compute-0 systemd[1]: libpod-3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640.scope: Deactivated successfully.
Nov 25 23:38:23 compute-0 podman[126756]: 2025-11-25 23:38:23.17947646 +0000 UTC m=+1.238840274 container died 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea59f65d329332d58b5a399d796d96339693cd368f7f126c7a10bae782a7e6b0-merged.mount: Deactivated successfully.
Nov 25 23:38:23 compute-0 podman[126756]: 2025-11-25 23:38:23.262994374 +0000 UTC m=+1.322358188 container remove 3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 23:38:23 compute-0 systemd[1]: libpod-conmon-3ba62df8cba2330ee7ffa9fb9f5945ca0eb567361dd60605ef5cce7b43b9e640.scope: Deactivated successfully.
Nov 25 23:38:23 compute-0 sudo[126553]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:23 compute-0 sudo[127047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:38:23 compute-0 sudo[127047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:23 compute-0 sudo[127047]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:23 compute-0 sudo[127072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:38:23 compute-0 sudo[127072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:23 compute-0 sudo[127072]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:23 compute-0 sudo[127097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:38:23 compute-0 sudo[127097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:23 compute-0 sudo[127097]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:23 compute-0 sudo[127122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:38:23 compute-0 sudo[127122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:23 compute-0 sudo[127233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrwkoymttnwhtllynsazojkhkjhnbohm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113903.1162815-370-170062647867476/AnsiballZ_blockinfile.py'
Nov 25 23:38:23 compute-0 sudo[127233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:23 compute-0 podman[127260]: 2025-11-25 23:38:23.893567592 +0000 UTC m=+0.054314129 container create 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 23:38:23 compute-0 python3.9[127240]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:23 compute-0 systemd[1]: Started libpod-conmon-621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c.scope.
Nov 25 23:38:23 compute-0 sudo[127233]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:38:23 compute-0 podman[127260]: 2025-11-25 23:38:23.863652508 +0000 UTC m=+0.024399105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:38:23 compute-0 podman[127260]: 2025-11-25 23:38:23.964488048 +0000 UTC m=+0.125234565 container init 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:38:23 compute-0 podman[127260]: 2025-11-25 23:38:23.974775902 +0000 UTC m=+0.135522429 container start 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 23:38:23 compute-0 dazzling_shannon[127277]: 167 167
Nov 25 23:38:23 compute-0 podman[127260]: 2025-11-25 23:38:23.978908346 +0000 UTC m=+0.139654863 container attach 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 23:38:23 compute-0 systemd[1]: libpod-621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c.scope: Deactivated successfully.
Nov 25 23:38:23 compute-0 podman[127260]: 2025-11-25 23:38:23.97979171 +0000 UTC m=+0.140538247 container died 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 23:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8ea403d3896ade7ef3894bf4772f140cf6c6431e8899d8fa05f9b0221f6cdef-merged.mount: Deactivated successfully.
Nov 25 23:38:24 compute-0 podman[127260]: 2025-11-25 23:38:24.045947564 +0000 UTC m=+0.206694101 container remove 621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_shannon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:38:24 compute-0 systemd[1]: libpod-conmon-621aa9ab26e70ac64006d39fca58f2dc182909a7c318b226ab7464d10a0f614c.scope: Deactivated successfully.
Nov 25 23:38:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:24 compute-0 podman[127330]: 2025-11-25 23:38:24.287682281 +0000 UTC m=+0.061542527 container create 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 23:38:24 compute-0 podman[127330]: 2025-11-25 23:38:24.257896189 +0000 UTC m=+0.031756485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:38:24 compute-0 systemd[1]: Started libpod-conmon-128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956.scope.
Nov 25 23:38:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f822ef8cc4421f28d58af02bb6979f0275d5c914589edbf89aca1e5bd3fde3f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f822ef8cc4421f28d58af02bb6979f0275d5c914589edbf89aca1e5bd3fde3f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f822ef8cc4421f28d58af02bb6979f0275d5c914589edbf89aca1e5bd3fde3f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f822ef8cc4421f28d58af02bb6979f0275d5c914589edbf89aca1e5bd3fde3f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:24 compute-0 podman[127330]: 2025-11-25 23:38:24.406837617 +0000 UTC m=+0.180697923 container init 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:38:24 compute-0 podman[127330]: 2025-11-25 23:38:24.419817115 +0000 UTC m=+0.193677341 container start 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:38:24 compute-0 podman[127330]: 2025-11-25 23:38:24.426504079 +0000 UTC m=+0.200364385 container attach 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:38:24 compute-0 sudo[127476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzlnonaqjkmmlrmykjevqgsnukwzwtqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113904.2566874-379-81026162010747/AnsiballZ_file.py'
Nov 25 23:38:24 compute-0 sudo[127476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:24 compute-0 python3.9[127478]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:24 compute-0 sudo[127476]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:25 compute-0 ceph-mon[75654]: pgmap v248: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:25 compute-0 modest_maxwell[127397]: {
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:     "0": [
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:         {
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "devices": [
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "/dev/loop3"
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             ],
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_name": "ceph_lv0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_size": "21470642176",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "name": "ceph_lv0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "tags": {
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.cluster_name": "ceph",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.crush_device_class": "",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.encrypted": "0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.osd_id": "0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.type": "block",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.vdo": "0"
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             },
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "type": "block",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "vg_name": "ceph_vg0"
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:         }
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:     ],
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:     "1": [
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:         {
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "devices": [
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "/dev/loop4"
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             ],
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_name": "ceph_lv1",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_size": "21470642176",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "name": "ceph_lv1",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "tags": {
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.cluster_name": "ceph",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.crush_device_class": "",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.encrypted": "0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.osd_id": "1",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.type": "block",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.vdo": "0"
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             },
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "type": "block",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "vg_name": "ceph_vg1"
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:         }
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:     ],
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:     "2": [
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:         {
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "devices": [
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "/dev/loop5"
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             ],
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_name": "ceph_lv2",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_size": "21470642176",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "name": "ceph_lv2",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "tags": {
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.cluster_name": "ceph",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.crush_device_class": "",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.encrypted": "0",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.osd_id": "2",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.type": "block",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:                 "ceph.vdo": "0"
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             },
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "type": "block",
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:             "vg_name": "ceph_vg2"
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:         }
Nov 25 23:38:25 compute-0 modest_maxwell[127397]:     ]
Nov 25 23:38:25 compute-0 modest_maxwell[127397]: }
Nov 25 23:38:25 compute-0 systemd[1]: libpod-128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956.scope: Deactivated successfully.
Nov 25 23:38:25 compute-0 podman[127330]: 2025-11-25 23:38:25.297961021 +0000 UTC m=+1.071821277 container died 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:38:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f822ef8cc4421f28d58af02bb6979f0275d5c914589edbf89aca1e5bd3fde3f8-merged.mount: Deactivated successfully.
Nov 25 23:38:25 compute-0 podman[127330]: 2025-11-25 23:38:25.429642342 +0000 UTC m=+1.203502598 container remove 128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 25 23:38:25 compute-0 systemd[1]: libpod-conmon-128c2958b019cef4f37e7756ec92b7cf13fed2601b94bf0c0c0657ab44880956.scope: Deactivated successfully.
Nov 25 23:38:25 compute-0 sudo[127647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfjemhgauplbqdlamgmqrobzedyzxola ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113905.041426-379-252533870350167/AnsiballZ_file.py'
Nov 25 23:38:25 compute-0 sudo[127647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:25 compute-0 sudo[127122]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:25 compute-0 sudo[127650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:38:25 compute-0 sudo[127650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:25 compute-0 sudo[127650]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:25 compute-0 sudo[127675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:38:25 compute-0 python3.9[127649]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:25 compute-0 sudo[127675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:25 compute-0 sudo[127675]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:25 compute-0 sudo[127647]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:25 compute-0 sudo[127700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:38:25 compute-0 sudo[127700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:25 compute-0 sudo[127700]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:25 compute-0 sudo[127749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:38:25 compute-0 sudo[127749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:38:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:38:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:38:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:38:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:38:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:38:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:26 compute-0 podman[127867]: 2025-11-25 23:38:26.271126617 +0000 UTC m=+0.062025991 container create 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 25 23:38:26 compute-0 systemd[1]: Started libpod-conmon-27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15.scope.
Nov 25 23:38:26 compute-0 podman[127867]: 2025-11-25 23:38:26.247437914 +0000 UTC m=+0.038337258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:38:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:38:26 compute-0 podman[127867]: 2025-11-25 23:38:26.373649344 +0000 UTC m=+0.164548728 container init 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 23:38:26 compute-0 podman[127867]: 2025-11-25 23:38:26.387033004 +0000 UTC m=+0.177932338 container start 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:38:26 compute-0 podman[127867]: 2025-11-25 23:38:26.390640703 +0000 UTC m=+0.181540037 container attach 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:38:26 compute-0 systemd[1]: libpod-27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15.scope: Deactivated successfully.
Nov 25 23:38:26 compute-0 crazy_lovelace[127906]: 167 167
Nov 25 23:38:26 compute-0 conmon[127906]: conmon 27248072fcef41e0a957 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15.scope/container/memory.events
Nov 25 23:38:26 compute-0 podman[127867]: 2025-11-25 23:38:26.395074336 +0000 UTC m=+0.185973700 container died 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 23:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9d6734a22b53bd07006e20f0d297bf603448731f1b2ad9b8dc48bcd8b478e61-merged.mount: Deactivated successfully.
Nov 25 23:38:26 compute-0 podman[127867]: 2025-11-25 23:38:26.43731366 +0000 UTC m=+0.228212994 container remove 27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:38:26 compute-0 systemd[1]: libpod-conmon-27248072fcef41e0a957317ae6ea11fab1f6ac13e9c041ba673276f7d7262c15.scope: Deactivated successfully.
Nov 25 23:38:26 compute-0 sudo[127973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwnchwlmsdhgdknztkkedhgvxerrpash ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113905.9068024-394-277839040047352/AnsiballZ_mount.py'
Nov 25 23:38:26 compute-0 sudo[127973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:26 compute-0 podman[127981]: 2025-11-25 23:38:26.625250313 +0000 UTC m=+0.075928205 container create 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:38:26 compute-0 podman[127981]: 2025-11-25 23:38:26.571225812 +0000 UTC m=+0.021903724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:38:26 compute-0 python3.9[127975]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 23:38:26 compute-0 sudo[127973]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:26 compute-0 systemd[1]: Started libpod-conmon-6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03.scope.
Nov 25 23:38:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05df26fa14b98d12c916101f28f601c3b9e83254bfebbe7c08807b9e533956d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05df26fa14b98d12c916101f28f601c3b9e83254bfebbe7c08807b9e533956d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05df26fa14b98d12c916101f28f601c3b9e83254bfebbe7c08807b9e533956d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05df26fa14b98d12c916101f28f601c3b9e83254bfebbe7c08807b9e533956d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:38:26 compute-0 podman[127981]: 2025-11-25 23:38:26.789607815 +0000 UTC m=+0.240285767 container init 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:38:26 compute-0 podman[127981]: 2025-11-25 23:38:26.799625842 +0000 UTC m=+0.250303764 container start 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 23:38:26 compute-0 podman[127981]: 2025-11-25 23:38:26.825583647 +0000 UTC m=+0.276261579 container attach 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:38:27 compute-0 sudo[128152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouhduzekhvvqpsetwkkebarlqazndgoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113906.8981826-394-147642715440727/AnsiballZ_mount.py'
Nov 25 23:38:27 compute-0 ceph-mon[75654]: pgmap v249: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:27 compute-0 sudo[128152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:27 compute-0 python3.9[128154]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 23:38:27 compute-0 sudo[128152]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]: {
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "osd_id": 2,
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "type": "bluestore"
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:     },
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "osd_id": 1,
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "type": "bluestore"
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:     },
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "osd_id": 0,
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:         "type": "bluestore"
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]:     }
Nov 25 23:38:27 compute-0 quizzical_kapitsa[127998]: }
Nov 25 23:38:27 compute-0 systemd[1]: libpod-6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03.scope: Deactivated successfully.
Nov 25 23:38:27 compute-0 systemd[1]: libpod-6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03.scope: Consumed 1.008s CPU time.
Nov 25 23:38:27 compute-0 podman[128207]: 2025-11-25 23:38:27.864567849 +0000 UTC m=+0.044133718 container died 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 23:38:27 compute-0 sshd-session[120245]: Connection closed by 192.168.122.30 port 34864
Nov 25 23:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-05df26fa14b98d12c916101f28f601c3b9e83254bfebbe7c08807b9e533956d4-merged.mount: Deactivated successfully.
Nov 25 23:38:27 compute-0 sshd-session[120242]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:38:27 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 25 23:38:27 compute-0 systemd[1]: session-40.scope: Consumed 36.235s CPU time.
Nov 25 23:38:27 compute-0 systemd-logind[789]: Session 40 logged out. Waiting for processes to exit.
Nov 25 23:38:27 compute-0 systemd-logind[789]: Removed session 40.
Nov 25 23:38:27 compute-0 podman[128207]: 2025-11-25 23:38:27.942705814 +0000 UTC m=+0.122271683 container remove 6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:38:27 compute-0 systemd[1]: libpod-conmon-6ed1bcd8ed7778f1f3be4de70edd6f8241313120df07ed76249f4ec1f49d6c03.scope: Deactivated successfully.
Nov 25 23:38:27 compute-0 sudo[127749]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:38:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:38:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:38:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:38:28 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 5a5b85cb-e577-42ac-8a68-307b78764378 does not exist
Nov 25 23:38:28 compute-0 sudo[128222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:38:28 compute-0 sudo[128222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:28 compute-0 sudo[128222]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:28 compute-0 sudo[128247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:38:28 compute-0 sudo[128247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:38:28 compute-0 sudo[128247]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:38:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:38:29 compute-0 ceph-mon[75654]: pgmap v250: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:31 compute-0 ceph-mon[75654]: pgmap v251: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:33 compute-0 ceph-mon[75654]: pgmap v252: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:33 compute-0 sshd-session[128272]: Accepted publickey for zuul from 192.168.122.30 port 53562 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:38:33 compute-0 systemd-logind[789]: New session 41 of user zuul.
Nov 25 23:38:33 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 25 23:38:33 compute-0 sshd-session[128272]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:38:34 compute-0 sudo[128425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taciyokuhlkqqntwovvdwrornqucwkyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113913.4790487-16-10247576725409/AnsiballZ_tempfile.py'
Nov 25 23:38:34 compute-0 sudo[128425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:34 compute-0 python3.9[128427]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 23:38:34 compute-0 sudo[128425]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:35 compute-0 sudo[128577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htbsjcfojiwdxalakagkeukbnzhvqbjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113914.60003-28-132166894515584/AnsiballZ_stat.py'
Nov 25 23:38:35 compute-0 sudo[128577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:35 compute-0 ceph-mon[75654]: pgmap v253: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:35 compute-0 python3.9[128579]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:38:35 compute-0 sudo[128577]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:36 compute-0 sudo[128731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aysxbxxokcskfsuoxqendtdigayqquim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113915.7402074-36-70370122684622/AnsiballZ_slurp.py'
Nov 25 23:38:36 compute-0 sudo[128731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:36 compute-0 python3.9[128733]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 25 23:38:36 compute-0 sudo[128731]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:37 compute-0 sudo[128883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqavbolsfkhomjnwmweofdxqwtkawktx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113916.684458-44-193719256025881/AnsiballZ_stat.py'
Nov 25 23:38:37 compute-0 sudo[128883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:37 compute-0 ceph-mon[75654]: pgmap v254: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:37 compute-0 python3.9[128885]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.zboni6_q follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:38:37 compute-0 sudo[128883]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:37 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 23:38:38 compute-0 sudo[129010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syenkgkkptfyfprvpswpjqozeiixmwvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113916.684458-44-193719256025881/AnsiballZ_copy.py'
Nov 25 23:38:38 compute-0 sudo[129010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:38 compute-0 python3.9[129012]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.zboni6_q mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113916.684458-44-193719256025881/.source.zboni6_q _original_basename=.rc3_2wdh follow=False checksum=e1ea54a4c377a432c95b6fe332b8f5ad4e4245e0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:38 compute-0 sudo[129010]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:39 compute-0 sudo[129162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pswsvpwsbjdmrspefpyhjrszocmpplom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113918.5131636-59-147261979175780/AnsiballZ_setup.py'
Nov 25 23:38:39 compute-0 sudo[129162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:39 compute-0 ceph-mon[75654]: pgmap v255: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:39 compute-0 python3.9[129164]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:38:39 compute-0 sudo[129162]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:40 compute-0 sudo[129314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rggncbaexsvzhleinkvsenjtcmlkfylb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113919.8568811-68-67681569752078/AnsiballZ_blockinfile.py'
Nov 25 23:38:40 compute-0 sudo[129314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:40 compute-0 python3.9[129316]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxrfdY9cGWIXdy/1Oy3o25kkem+UfkfNZM3QAaYeemr9vZEt0Kpt4rTEaZjtK/HkgMSoli0ko2twHhREfmcDjCZiEvSPhpr9yvJyxLe6m3r7nR2fIVc/1+5SeUdcJGWT8hvgD5okMZtCerl/MiW6+tFRt7Ar6X2TFlwXPjq3wia85WpL7X9vq40wZz0XlbpQxNxcEJWeVajcrd63Qib0m1FmhnmHPUqLHN0WmxXnMtONzo4fUQjq3zn230bIZCmjbFatl10s4NRy2udfAA7Xi0ubCZxQ/E8omg7y4ZxA94dJHZPmkCFSVLZUqdW3S3Ofhcem+PFVKRR2UvfcYHi79G6lS5brk3pbHqdyjd4/3scYp3aXFFt7ErEEhVud762RLGAHeACGlJQxmX8B/FbnWmbkw8BfptrYtzSuSqIXmN3UXrLrmfRrB+IMcIbbs/vzVMk6n6BzUjdXscFfnPltHEyvmdeIEBDyC5FLoJ2bTTrQpLt63pLIU09IA55rhBA+E=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBQX5RNdc24Y/t6cF9q9hL3e4G9bhmnpPT/NJWIujGtr
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKt48jJg/HSNlIL9ftEIQgyUPOj8qZ1KotNNqzrVPi+UhJTDsaDnHI9k4z0iWOz87RQtpHNoPDx9+/vOjXzjj4o=
                                              create=True mode=0644 path=/tmp/ansible.zboni6_q state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:40 compute-0 sudo[129314]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:41 compute-0 ceph-mon[75654]: pgmap v256: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:41 compute-0 sudo[129466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnqaueyuiuoqkdnfbmlfgmucbbsfpvfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113920.9084284-76-130973256968541/AnsiballZ_command.py'
Nov 25 23:38:41 compute-0 sudo[129466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:41 compute-0 python3.9[129468]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.zboni6_q' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:38:41 compute-0 sudo[129466]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:42 compute-0 sudo[129620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcguijzdnejlgsbnrmhtxcjngxsqqeym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113921.8754537-84-143723572053527/AnsiballZ_file.py'
Nov 25 23:38:42 compute-0 sudo[129620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:42 compute-0 python3.9[129622]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.zboni6_q state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:42 compute-0 sudo[129620]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:43 compute-0 sshd-session[128275]: Connection closed by 192.168.122.30 port 53562
Nov 25 23:38:43 compute-0 sshd-session[128272]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:38:43 compute-0 systemd-logind[789]: Session 41 logged out. Waiting for processes to exit.
Nov 25 23:38:43 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 25 23:38:43 compute-0 systemd[1]: session-41.scope: Consumed 6.665s CPU time.
Nov 25 23:38:43 compute-0 systemd-logind[789]: Removed session 41.
Nov 25 23:38:43 compute-0 ceph-mon[75654]: pgmap v257: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:44 compute-0 sshd-session[129647]: Received disconnect from 202.83.162.167 port 47652:11: Bye Bye [preauth]
Nov 25 23:38:44 compute-0 sshd-session[129647]: Disconnected from authenticating user root 202.83.162.167 port 47652 [preauth]
Nov 25 23:38:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:45 compute-0 ceph-mon[75654]: pgmap v258: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:47 compute-0 ceph-mon[75654]: pgmap v259: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:48 compute-0 sshd-session[129649]: Accepted publickey for zuul from 192.168.122.30 port 51036 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:38:48 compute-0 systemd-logind[789]: New session 42 of user zuul.
Nov 25 23:38:48 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 25 23:38:48 compute-0 sshd-session[129649]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:38:49 compute-0 ceph-mon[75654]: pgmap v260: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:49 compute-0 python3.9[129802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:38:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:50 compute-0 sudo[129956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwhmgblojimlcfyaawsdbzubbaghicva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113930.0964506-32-221906413184893/AnsiballZ_systemd.py'
Nov 25 23:38:50 compute-0 sudo[129956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:50 compute-0 python3.9[129958]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 23:38:51 compute-0 sudo[129956]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:51 compute-0 ceph-mon[75654]: pgmap v261: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:51 compute-0 sudo[130110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwqafoloihadrgjootqhgtszfobqtgjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113931.234904-40-266555721147882/AnsiballZ_systemd.py'
Nov 25 23:38:51 compute-0 sudo[130110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:51 compute-0 python3.9[130112]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:38:52 compute-0 sudo[130110]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:52 compute-0 sshd-session[71905]: Received disconnect from 38.102.83.97 port 49484:11: disconnected by user
Nov 25 23:38:52 compute-0 sshd-session[71905]: Disconnected from user zuul 38.102.83.97 port 49484
Nov 25 23:38:52 compute-0 sshd-session[71902]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:38:52 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 25 23:38:52 compute-0 systemd[1]: session-18.scope: Consumed 1min 27.470s CPU time.
Nov 25 23:38:52 compute-0 systemd-logind[789]: Session 18 logged out. Waiting for processes to exit.
Nov 25 23:38:52 compute-0 systemd-logind[789]: Removed session 18.
Nov 25 23:38:52 compute-0 sudo[130263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liqomghkticimhrdrxgebdmtexyunkoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113932.2750733-49-60672507718096/AnsiballZ_command.py'
Nov 25 23:38:52 compute-0 sudo[130263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:53 compute-0 python3.9[130265]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:38:53 compute-0 sudo[130263]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:53 compute-0 ceph-mon[75654]: pgmap v262: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:53 compute-0 sudo[130416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmhouyuumzrtrtpxwigjrmkaonbczbmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113933.3327432-57-152921554014323/AnsiballZ_stat.py'
Nov 25 23:38:53 compute-0 sudo[130416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:54 compute-0 python3.9[130418]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:38:54 compute-0 sudo[130416]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:54 compute-0 sudo[130568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eznrfslqcsvuqjfuacoktcffuzppmzld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113934.3628235-66-33719454894579/AnsiballZ_file.py'
Nov 25 23:38:54 compute-0 sudo[130568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:38:55 compute-0 python3.9[130570]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:38:55 compute-0 sudo[130568]: pam_unix(sudo:session): session closed for user root
Nov 25 23:38:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:38:55 compute-0 ceph-mon[75654]: pgmap v263: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:55 compute-0 sshd-session[129652]: Connection closed by 192.168.122.30 port 51036
Nov 25 23:38:55 compute-0 sshd-session[129649]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:38:55 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 25 23:38:55 compute-0 systemd[1]: session-42.scope: Consumed 4.560s CPU time.
Nov 25 23:38:55 compute-0 systemd-logind[789]: Session 42 logged out. Waiting for processes to exit.
Nov 25 23:38:55 compute-0 systemd-logind[789]: Removed session 42.
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:38:56
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'volumes', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:38:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:56 compute-0 sshd-session[130571]: Received disconnect from 124.18.141.70 port 57050:11: Bye Bye [preauth]
Nov 25 23:38:56 compute-0 sshd-session[130571]: Disconnected from authenticating user root 124.18.141.70 port 57050 [preauth]
Nov 25 23:38:57 compute-0 ceph-mon[75654]: pgmap v264: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:38:59 compute-0 ceph-mon[75654]: pgmap v265: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:00 compute-0 sshd-session[130597]: Accepted publickey for zuul from 192.168.122.30 port 43872 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:39:00 compute-0 systemd-logind[789]: New session 43 of user zuul.
Nov 25 23:39:00 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 25 23:39:00 compute-0 sshd-session[130597]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:39:01 compute-0 ceph-mon[75654]: pgmap v266: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:39:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:39:01 compute-0 python3.9[130750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:39:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:02 compute-0 sudo[130904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oycgqfkgaphrfzmsswpogulzbbdpnexu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113942.2264366-34-164345956163446/AnsiballZ_setup.py'
Nov 25 23:39:02 compute-0 sudo[130904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:02 compute-0 python3.9[130906]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:39:03 compute-0 sudo[130904]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:03 compute-0 ceph-mon[75654]: pgmap v267: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:03 compute-0 sudo[130988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnwlkyvcfxkgmmciabsvomroqithpeey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113942.2264366-34-164345956163446/AnsiballZ_dnf.py'
Nov 25 23:39:03 compute-0 sudo[130988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:03 compute-0 python3.9[130990]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 23:39:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:05 compute-0 sudo[130988]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:05 compute-0 ceph-mon[75654]: pgmap v268: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:06 compute-0 python3.9[131141]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:39:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:07 compute-0 ceph-mon[75654]: pgmap v269: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:07 compute-0 python3.9[131292]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 23:39:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:08 compute-0 python3.9[131442]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:39:09 compute-0 python3.9[131592]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:39:09 compute-0 ceph-mon[75654]: pgmap v270: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:09 compute-0 sshd-session[130600]: Connection closed by 192.168.122.30 port 43872
Nov 25 23:39:09 compute-0 sshd-session[130597]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:39:09 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 25 23:39:09 compute-0 systemd[1]: session-43.scope: Consumed 6.458s CPU time.
Nov 25 23:39:09 compute-0 systemd-logind[789]: Session 43 logged out. Waiting for processes to exit.
Nov 25 23:39:09 compute-0 systemd-logind[789]: Removed session 43.
Nov 25 23:39:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:11 compute-0 ceph-mon[75654]: pgmap v271: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:13 compute-0 ceph-mon[75654]: pgmap v272: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:14 compute-0 ceph-mon[75654]: pgmap v273: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:15 compute-0 sshd-session[131617]: Accepted publickey for zuul from 192.168.122.30 port 40942 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:39:15 compute-0 systemd-logind[789]: New session 44 of user zuul.
Nov 25 23:39:15 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 25 23:39:15 compute-0 sshd-session[131617]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:39:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:16 compute-0 python3.9[131770]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:39:17 compute-0 ceph-mon[75654]: pgmap v274: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:18 compute-0 sudo[131924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcfujnjrhtbqdpqmohvyxplarqkscptj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113957.8888497-50-225373033178940/AnsiballZ_file.py'
Nov 25 23:39:18 compute-0 sudo[131924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:18 compute-0 python3.9[131926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:18 compute-0 sudo[131924]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:19 compute-0 sudo[132076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdanktlpblrpaohtrstjrgrkeqbwtax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113958.8426297-50-129966430963440/AnsiballZ_file.py'
Nov 25 23:39:19 compute-0 sudo[132076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:19 compute-0 ceph-mon[75654]: pgmap v275: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:19 compute-0 python3.9[132078]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:19 compute-0 sudo[132076]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:20 compute-0 sudo[132228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slmldwypxiqyjytbgbxatsizkjybipxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113959.6805308-65-268349530606722/AnsiballZ_stat.py'
Nov 25 23:39:20 compute-0 sudo[132228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:20 compute-0 python3.9[132230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:20 compute-0 sudo[132228]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:21 compute-0 sudo[132351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvjkecjkrxafzuhqmiiixewflaxapnyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113959.6805308-65-268349530606722/AnsiballZ_copy.py'
Nov 25 23:39:21 compute-0 sudo[132351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:21 compute-0 python3.9[132353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113959.6805308-65-268349530606722/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=708a802323a417e1d7112a11e86e380054abde76 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:21 compute-0 sudo[132351]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:21 compute-0 ceph-mon[75654]: pgmap v276: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:21 compute-0 sudo[132503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqfojgtsgckqkbipozlztinrpghogpki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113961.4226327-65-95726541018487/AnsiballZ_stat.py'
Nov 25 23:39:21 compute-0 sudo[132503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:22 compute-0 python3.9[132505]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:22 compute-0 sudo[132503]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:22 compute-0 sudo[132626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-helrcnflpulxttkwklvxhmnsurjtjbjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113961.4226327-65-95726541018487/AnsiballZ_copy.py'
Nov 25 23:39:22 compute-0 sudo[132626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:22 compute-0 python3.9[132628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113961.4226327-65-95726541018487/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b24b90e66ef71edc8d3c31d00769c6c66b0b4046 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:22 compute-0 sudo[132626]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:23 compute-0 ceph-mon[75654]: pgmap v277: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:23 compute-0 sudo[132778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbmcfkvtlsqkbchrhogqykrmjmlnilpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113962.92012-65-239447004265498/AnsiballZ_stat.py'
Nov 25 23:39:23 compute-0 sudo[132778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:23 compute-0 python3.9[132780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:23 compute-0 sudo[132778]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:24 compute-0 sudo[132901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaimpvcmxriikilfbmfoxufbhiwxgasm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113962.92012-65-239447004265498/AnsiballZ_copy.py'
Nov 25 23:39:24 compute-0 sudo[132901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:24 compute-0 python3.9[132903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113962.92012-65-239447004265498/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f6acf4e4a587aa842132b9a48c89216027c4f3e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:24 compute-0 sudo[132901]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:24 compute-0 sudo[133053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwsesgmgesqxolcxqskdgdnvkmxgxlry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113964.5934405-109-256556883910252/AnsiballZ_file.py'
Nov 25 23:39:24 compute-0 sudo[133053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:25 compute-0 python3.9[133055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:25 compute-0 sudo[133053]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:25 compute-0 ceph-mon[75654]: pgmap v278: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:25 compute-0 sudo[133205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbzuppnhocijzggfwbsnvocwxyfsmonw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113965.385436-109-181110528140693/AnsiballZ_file.py'
Nov 25 23:39:25 compute-0 sudo[133205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:25 compute-0 python3.9[133207]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:25 compute-0 sudo[133205]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:39:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:39:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:39:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:39:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:39:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:39:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:26 compute-0 sudo[133357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyehdowgojzsvtsuahlylvnbupksltiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113966.1543388-124-209122477058586/AnsiballZ_stat.py'
Nov 25 23:39:26 compute-0 sudo[133357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:26 compute-0 python3.9[133359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:26 compute-0 sudo[133357]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:27 compute-0 sudo[133480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbmsabrcptxgkbjbzfmwunwmzexwgsbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113966.1543388-124-209122477058586/AnsiballZ_copy.py'
Nov 25 23:39:27 compute-0 sudo[133480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:27 compute-0 ceph-mon[75654]: pgmap v279: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:27 compute-0 python3.9[133482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113966.1543388-124-209122477058586/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=acafaa52eb7a56de33ea8a98161fffe977c5e7c8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:27 compute-0 sudo[133480]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:27 compute-0 sudo[133632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fidhgzeksftomxgscgonwgjbzgdlurqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113967.6123862-124-235392930615/AnsiballZ_stat.py'
Nov 25 23:39:27 compute-0 sudo[133632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:28 compute-0 python3.9[133634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:28 compute-0 sudo[133632]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:28 compute-0 sudo[133635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:39:28 compute-0 sudo[133635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:28 compute-0 sudo[133635]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:28 compute-0 sudo[133683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:39:28 compute-0 sudo[133683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:28 compute-0 sudo[133683]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:28 compute-0 sudo[133732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:39:28 compute-0 sudo[133732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:28 compute-0 sudo[133732]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:28 compute-0 sudo[133780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:39:28 compute-0 sudo[133780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:28 compute-0 sudo[133855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juvfqjlbfcmoysfhdwxrufwwzcvqyxpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113967.6123862-124-235392930615/AnsiballZ_copy.py'
Nov 25 23:39:28 compute-0 sudo[133855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:28 compute-0 python3.9[133857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113967.6123862-124-235392930615/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d9b8f08c6aff6f6c46c7d6bb501615569a442e0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:28 compute-0 sudo[133855]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:29 compute-0 sudo[133780]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:39:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:39:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:39:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:39:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:39:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:39:29 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 70784659-3f49-416b-889d-9874e52e9ff9 does not exist
Nov 25 23:39:29 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev be3046af-7de4-4037-b296-96a53e432a23 does not exist
Nov 25 23:39:29 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 893e1cdb-5359-436b-a355-30b9a7b4e756 does not exist
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.157536) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969157664, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6639, "num_deletes": 251, "total_data_size": 7389855, "memory_usage": 7597936, "flush_reason": "Manual Compaction"}
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 25 23:39:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:39:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:39:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:39:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:39:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:39:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969192935, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 5579081, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 6777, "table_properties": {"data_size": 5555946, "index_size": 14957, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7109, "raw_key_size": 63013, "raw_average_key_size": 22, "raw_value_size": 5501803, "raw_average_value_size": 1936, "num_data_blocks": 670, "num_entries": 2841, "num_filter_entries": 2841, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113470, "oldest_key_time": 1764113470, "file_creation_time": 1764113969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 35930 microseconds, and 21421 cpu microseconds.
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.193459) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 5579081 bytes OK
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.193665) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.195350) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.195374) EVENT_LOG_v1 {"time_micros": 1764113969195365, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.195403) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 7361779, prev total WAL file size 7361779, number of live WAL files 2.
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.199492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(5448KB) 13(53KB) 8(1944B)]
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969199647, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 5636282, "oldest_snapshot_seqno": -1}
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 2657 keys, 5591633 bytes, temperature: kUnknown
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969241812, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 5591633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5568878, "index_size": 15030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6661, "raw_key_size": 61085, "raw_average_key_size": 22, "raw_value_size": 5516351, "raw_average_value_size": 2076, "num_data_blocks": 673, "num_entries": 2657, "num_filter_entries": 2657, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764113969, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.242152) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 5591633 bytes
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.243913) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 132.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(5.4, 0.0 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2947, records dropped: 290 output_compression: NoCompression
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.243945) EVENT_LOG_v1 {"time_micros": 1764113969243929, "job": 4, "event": "compaction_finished", "compaction_time_micros": 42262, "compaction_time_cpu_micros": 24867, "output_level": 6, "num_output_files": 1, "total_output_size": 5591633, "num_input_records": 2947, "num_output_records": 2657, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:39:29 compute-0 sudo[133970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969246543, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969246645, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764113969246705, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 25 23:39:29 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:39:29.199358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:39:29 compute-0 sudo[133970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:29 compute-0 sudo[133970]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:29 compute-0 ceph-mon[75654]: pgmap v280: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:39:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:39:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:39:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:39:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:39:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:39:29 compute-0 sudo[134021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:39:29 compute-0 sudo[134021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:29 compute-0 sudo[134021]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:29 compute-0 sudo[134106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eghtdipouwymxzekejrmjtdsvzgblomd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113969.0418453-124-71774113746416/AnsiballZ_stat.py'
Nov 25 23:39:29 compute-0 sudo[134106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:29 compute-0 sudo[134073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:39:29 compute-0 sudo[134073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:29 compute-0 sudo[134073]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:29 compute-0 sudo[134116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:39:29 compute-0 sudo[134116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:29 compute-0 python3.9[134113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:29 compute-0 sudo[134106]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:29 compute-0 podman[134234]: 2025-11-25 23:39:29.845146171 +0000 UTC m=+0.044749353 container create e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 23:39:29 compute-0 systemd[1]: Started libpod-conmon-e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a.scope.
Nov 25 23:39:29 compute-0 podman[134234]: 2025-11-25 23:39:29.825074759 +0000 UTC m=+0.024677991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:39:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:39:29 compute-0 podman[134234]: 2025-11-25 23:39:29.940128677 +0000 UTC m=+0.139731949 container init e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 23:39:29 compute-0 podman[134234]: 2025-11-25 23:39:29.949537377 +0000 UTC m=+0.149140559 container start e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:39:29 compute-0 podman[134234]: 2025-11-25 23:39:29.953207968 +0000 UTC m=+0.152811170 container attach e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 23:39:29 compute-0 blissful_elion[134288]: 167 167
Nov 25 23:39:29 compute-0 systemd[1]: libpod-e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a.scope: Deactivated successfully.
Nov 25 23:39:29 compute-0 conmon[134288]: conmon e81706e9817ac3e5d0b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a.scope/container/memory.events
Nov 25 23:39:29 compute-0 podman[134234]: 2025-11-25 23:39:29.958256887 +0000 UTC m=+0.157860089 container died e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-935ed644a086e8888fa5c739cd429c5be0b0c0e9879be0a91b476cce89d1db16-merged.mount: Deactivated successfully.
Nov 25 23:39:30 compute-0 podman[134234]: 2025-11-25 23:39:30.001493268 +0000 UTC m=+0.201096460 container remove e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:39:30 compute-0 systemd[1]: libpod-conmon-e81706e9817ac3e5d0b2bad0563b48611d99381cc064064e4e32d31ba408f34a.scope: Deactivated successfully.
Nov 25 23:39:30 compute-0 sudo[134330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujcfbpbyezghrsjiusqqkcdcdqqukddn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113969.0418453-124-71774113746416/AnsiballZ_copy.py'
Nov 25 23:39:30 compute-0 sudo[134330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:30 compute-0 podman[134344]: 2025-11-25 23:39:30.194596606 +0000 UTC m=+0.043594551 container create 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:39:30 compute-0 python3.9[134338]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113969.0418453-124-71774113746416/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0f628f24b5a0cafde1c45217ffb210d911430796 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:30 compute-0 sudo[134330]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:30 compute-0 systemd[1]: Started libpod-conmon-95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3.scope.
Nov 25 23:39:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:30 compute-0 podman[134344]: 2025-11-25 23:39:30.179006287 +0000 UTC m=+0.028004242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:39:30 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:30 compute-0 podman[134344]: 2025-11-25 23:39:30.303314161 +0000 UTC m=+0.152312116 container init 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 23:39:30 compute-0 podman[134344]: 2025-11-25 23:39:30.317992015 +0000 UTC m=+0.166989950 container start 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 23:39:30 compute-0 podman[134344]: 2025-11-25 23:39:30.321756299 +0000 UTC m=+0.170754264 container attach 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:39:30 compute-0 sudo[134515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drbrbreiymzednjxbnwlinhsjcwxueso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113970.498762-168-103404085105317/AnsiballZ_file.py'
Nov 25 23:39:30 compute-0 sudo[134515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:31 compute-0 python3.9[134517]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:31 compute-0 sudo[134515]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:31 compute-0 ceph-mon[75654]: pgmap v281: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:31 compute-0 zen_booth[134361]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:39:31 compute-0 zen_booth[134361]: --> relative data size: 1.0
Nov 25 23:39:31 compute-0 zen_booth[134361]: --> All data devices are unavailable
Nov 25 23:39:31 compute-0 systemd[1]: libpod-95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3.scope: Deactivated successfully.
Nov 25 23:39:31 compute-0 systemd[1]: libpod-95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3.scope: Consumed 1.010s CPU time.
Nov 25 23:39:31 compute-0 podman[134344]: 2025-11-25 23:39:31.405699702 +0000 UTC m=+1.254697657 container died 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:39:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-734d84fa67c61456db7a3d20ed42bfa1d0ddfd607dff90900b0ab27cabee3703-merged.mount: Deactivated successfully.
Nov 25 23:39:31 compute-0 podman[134344]: 2025-11-25 23:39:31.473746317 +0000 UTC m=+1.322744262 container remove 95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:39:31 compute-0 systemd[1]: libpod-conmon-95fd9f8ab8b39dbc9a96785ca3e82917e569fe4f4cc3b4d26e263fab834448f3.scope: Deactivated successfully.
Nov 25 23:39:31 compute-0 sudo[134116]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:31 compute-0 sudo[134717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipmfwzwbxyhpuloiywgbjrtaluyejlui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113971.2196171-168-176416269364171/AnsiballZ_file.py'
Nov 25 23:39:31 compute-0 sudo[134717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:31 compute-0 sudo[134688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:39:31 compute-0 sudo[134688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:31 compute-0 sudo[134688]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:31 compute-0 sudo[134732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:39:31 compute-0 sudo[134732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:31 compute-0 sudo[134732]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:31 compute-0 sudo[134757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:39:31 compute-0 sudo[134757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:31 compute-0 sudo[134757]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:31 compute-0 python3.9[134729]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:31 compute-0 sudo[134717]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:31 compute-0 sudo[134782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:39:31 compute-0 sudo[134782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:32 compute-0 podman[134923]: 2025-11-25 23:39:32.218342524 +0000 UTC m=+0.051258082 container create 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:39:32 compute-0 systemd[1]: Started libpod-conmon-3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c.scope.
Nov 25 23:39:32 compute-0 podman[134923]: 2025-11-25 23:39:32.195935217 +0000 UTC m=+0.028850845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:39:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:39:32 compute-0 podman[134923]: 2025-11-25 23:39:32.33291837 +0000 UTC m=+0.165834008 container init 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 23:39:32 compute-0 podman[134923]: 2025-11-25 23:39:32.343350727 +0000 UTC m=+0.176266315 container start 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:39:32 compute-0 podman[134923]: 2025-11-25 23:39:32.347614095 +0000 UTC m=+0.180529693 container attach 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 23:39:32 compute-0 charming_jackson[134962]: 167 167
Nov 25 23:39:32 compute-0 systemd[1]: libpod-3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c.scope: Deactivated successfully.
Nov 25 23:39:32 compute-0 podman[134923]: 2025-11-25 23:39:32.353523997 +0000 UTC m=+0.186439575 container died 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 23:39:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1765c60143c495ae4ced75202631be7a08601e36037577d665b815adb2226951-merged.mount: Deactivated successfully.
Nov 25 23:39:32 compute-0 podman[134923]: 2025-11-25 23:39:32.400978404 +0000 UTC m=+0.233893962 container remove 3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:39:32 compute-0 systemd[1]: libpod-conmon-3657f3c5515de2da99a24d367264ead9a1375e800dd2b8cc2522e387b4bff79c.scope: Deactivated successfully.
Nov 25 23:39:32 compute-0 sudo[135031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yexsyygndnnoknvookdqppyqvawiqgfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113972.066453-183-146216826534562/AnsiballZ_stat.py'
Nov 25 23:39:32 compute-0 sudo[135031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:32 compute-0 podman[135039]: 2025-11-25 23:39:32.634344342 +0000 UTC m=+0.065018152 container create f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:39:32 compute-0 python3.9[135033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:32 compute-0 systemd[1]: Started libpod-conmon-f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c.scope.
Nov 25 23:39:32 compute-0 sudo[135031]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:32 compute-0 podman[135039]: 2025-11-25 23:39:32.60776357 +0000 UTC m=+0.038437390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:39:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226a952aa181c51da27f8819c5983527197a02a5ee4bdaa7edf7ece42030577c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226a952aa181c51da27f8819c5983527197a02a5ee4bdaa7edf7ece42030577c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226a952aa181c51da27f8819c5983527197a02a5ee4bdaa7edf7ece42030577c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/226a952aa181c51da27f8819c5983527197a02a5ee4bdaa7edf7ece42030577c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:32 compute-0 podman[135039]: 2025-11-25 23:39:32.753043771 +0000 UTC m=+0.183717571 container init f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:39:32 compute-0 podman[135039]: 2025-11-25 23:39:32.765536135 +0000 UTC m=+0.196209955 container start f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 23:39:32 compute-0 podman[135039]: 2025-11-25 23:39:32.771052357 +0000 UTC m=+0.201726147 container attach f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 23:39:33 compute-0 sudo[135182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdcozyrrghdmzgbkuktdljiqvjopcyqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113972.066453-183-146216826534562/AnsiballZ_copy.py'
Nov 25 23:39:33 compute-0 sudo[135182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:33 compute-0 ceph-mon[75654]: pgmap v282: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:33 compute-0 python3.9[135184]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113972.066453-183-146216826534562/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=82490ca0d4ca448635a334617268049830ebd7a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:33 compute-0 sudo[135182]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]: {
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:     "0": [
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:         {
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "devices": [
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "/dev/loop3"
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             ],
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_name": "ceph_lv0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_size": "21470642176",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "name": "ceph_lv0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "tags": {
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.cluster_name": "ceph",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.crush_device_class": "",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.encrypted": "0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.osd_id": "0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.type": "block",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.vdo": "0"
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             },
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "type": "block",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "vg_name": "ceph_vg0"
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:         }
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:     ],
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:     "1": [
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:         {
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "devices": [
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "/dev/loop4"
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             ],
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_name": "ceph_lv1",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_size": "21470642176",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "name": "ceph_lv1",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "tags": {
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.cluster_name": "ceph",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.crush_device_class": "",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.encrypted": "0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.osd_id": "1",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.type": "block",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.vdo": "0"
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             },
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "type": "block",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "vg_name": "ceph_vg1"
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:         }
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:     ],
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:     "2": [
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:         {
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "devices": [
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "/dev/loop5"
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             ],
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_name": "ceph_lv2",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_size": "21470642176",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "name": "ceph_lv2",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "tags": {
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.cluster_name": "ceph",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.crush_device_class": "",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.encrypted": "0",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.osd_id": "2",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.type": "block",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:                 "ceph.vdo": "0"
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             },
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "type": "block",
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:             "vg_name": "ceph_vg2"
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:         }
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]:     ]
Nov 25 23:39:33 compute-0 happy_ptolemy[135057]: }
Nov 25 23:39:33 compute-0 systemd[1]: libpod-f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c.scope: Deactivated successfully.
Nov 25 23:39:33 compute-0 podman[135039]: 2025-11-25 23:39:33.55941269 +0000 UTC m=+0.990086470 container died f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 23:39:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-226a952aa181c51da27f8819c5983527197a02a5ee4bdaa7edf7ece42030577c-merged.mount: Deactivated successfully.
Nov 25 23:39:33 compute-0 podman[135039]: 2025-11-25 23:39:33.617927911 +0000 UTC m=+1.048601701 container remove f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:39:33 compute-0 systemd[1]: libpod-conmon-f3021f7c08142ad743c76f0875cb642a3e6cd1e6c3172f73e984f21be6d59a0c.scope: Deactivated successfully.
Nov 25 23:39:33 compute-0 sudo[134782]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:33 compute-0 sudo[135254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:39:33 compute-0 sudo[135254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:33 compute-0 sudo[135254]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:33 compute-0 sudo[135308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:39:33 compute-0 sudo[135308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:33 compute-0 sudo[135308]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:33 compute-0 sudo[135352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:39:33 compute-0 sudo[135352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:33 compute-0 sudo[135352]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:34 compute-0 sudo[135398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:39:34 compute-0 sudo[135398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:34 compute-0 sudo[135447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioqmlxnocxyddjwezvlhwgpddwbriwvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113973.6404994-183-260501623802104/AnsiballZ_stat.py'
Nov 25 23:39:34 compute-0 sudo[135447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:34 compute-0 python3.9[135450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:34 compute-0 sudo[135447]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:34 compute-0 podman[135514]: 2025-11-25 23:39:34.419057986 +0000 UTC m=+0.049004521 container create 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 23:39:34 compute-0 systemd[1]: Started libpod-conmon-2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f.scope.
Nov 25 23:39:34 compute-0 podman[135514]: 2025-11-25 23:39:34.399793655 +0000 UTC m=+0.029740180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:39:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:39:34 compute-0 podman[135514]: 2025-11-25 23:39:34.516064487 +0000 UTC m=+0.146011032 container init 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:39:34 compute-0 podman[135514]: 2025-11-25 23:39:34.52559386 +0000 UTC m=+0.155540375 container start 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:39:34 compute-0 podman[135514]: 2025-11-25 23:39:34.529013345 +0000 UTC m=+0.158959860 container attach 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:39:34 compute-0 lucid_mclean[135554]: 167 167
Nov 25 23:39:34 compute-0 systemd[1]: libpod-2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f.scope: Deactivated successfully.
Nov 25 23:39:34 compute-0 podman[135514]: 2025-11-25 23:39:34.533767385 +0000 UTC m=+0.163713900 container died 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:39:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee3e201b3e1ad1a87f8e14d6ff3104296581143b26b5c5b2b263939aaffe54ad-merged.mount: Deactivated successfully.
Nov 25 23:39:34 compute-0 podman[135514]: 2025-11-25 23:39:34.573126589 +0000 UTC m=+0.203073104 container remove 2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 23:39:34 compute-0 systemd[1]: libpod-conmon-2f1d3628882aebe17a272c5b92d2f5fe686d4eb1a5314f7b243daaf52132656f.scope: Deactivated successfully.
Nov 25 23:39:34 compute-0 sudo[135655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugtbweyaeeqgyyyhzdbjgtbpcqcgnqmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113973.6404994-183-260501623802104/AnsiballZ_copy.py'
Nov 25 23:39:34 compute-0 sudo[135655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:34 compute-0 podman[135645]: 2025-11-25 23:39:34.784222683 +0000 UTC m=+0.056295321 container create 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:39:34 compute-0 systemd[1]: Started libpod-conmon-28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3.scope.
Nov 25 23:39:34 compute-0 podman[135645]: 2025-11-25 23:39:34.765350983 +0000 UTC m=+0.037423671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:39:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d2ffa4f6670520e56e4478f42c2290e8e2c108f4b23ed5b921db8b0847b399/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d2ffa4f6670520e56e4478f42c2290e8e2c108f4b23ed5b921db8b0847b399/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d2ffa4f6670520e56e4478f42c2290e8e2c108f4b23ed5b921db8b0847b399/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d2ffa4f6670520e56e4478f42c2290e8e2c108f4b23ed5b921db8b0847b399/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:39:34 compute-0 podman[135645]: 2025-11-25 23:39:34.910242594 +0000 UTC m=+0.182315292 container init 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 23:39:34 compute-0 podman[135645]: 2025-11-25 23:39:34.921137225 +0000 UTC m=+0.193209883 container start 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:39:34 compute-0 podman[135645]: 2025-11-25 23:39:34.924424445 +0000 UTC m=+0.196497093 container attach 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 25 23:39:34 compute-0 python3.9[135665]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113973.6404994-183-260501623802104/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d9b8f08c6aff6f6c46c7d6bb501615569a442e0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:35 compute-0 sudo[135655]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:35 compute-0 ceph-mon[75654]: pgmap v283: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:35 compute-0 sudo[135829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpilcxybzooogekmcgtqnmjquesebwwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113975.1989286-183-222597940016056/AnsiballZ_stat.py'
Nov 25 23:39:35 compute-0 sudo[135829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:35 compute-0 python3.9[135834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:35 compute-0 sudo[135829]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]: {
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "osd_id": 2,
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "type": "bluestore"
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:     },
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "osd_id": 1,
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "type": "bluestore"
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:     },
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "osd_id": 0,
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:         "type": "bluestore"
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]:     }
Nov 25 23:39:35 compute-0 stupefied_dewdney[135670]: }
Nov 25 23:39:36 compute-0 systemd[1]: libpod-28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3.scope: Deactivated successfully.
Nov 25 23:39:36 compute-0 systemd[1]: libpod-28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3.scope: Consumed 1.089s CPU time.
Nov 25 23:39:36 compute-0 podman[135645]: 2025-11-25 23:39:36.006371364 +0000 UTC m=+1.278444042 container died 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d2ffa4f6670520e56e4478f42c2290e8e2c108f4b23ed5b921db8b0847b399-merged.mount: Deactivated successfully.
Nov 25 23:39:36 compute-0 podman[135645]: 2025-11-25 23:39:36.062239233 +0000 UTC m=+1.334311871 container remove 28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_dewdney, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:39:36 compute-0 systemd[1]: libpod-conmon-28798d46ba0de11ca4f21fcad5709ad80267c3eac730b7064859118a814125e3.scope: Deactivated successfully.
Nov 25 23:39:36 compute-0 sudo[135398]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:39:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:39:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:39:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:39:36 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 8646aa4b-d4cd-4c90-8c4f-732837c37d3b does not exist
Nov 25 23:39:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:36 compute-0 sudo[135940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:39:36 compute-0 sudo[135940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:36 compute-0 sudo[135940]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:36 compute-0 sudo[135989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:39:36 compute-0 sudo[135989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:39:36 compute-0 sudo[135989]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:36 compute-0 sudo[136040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxinfqvofynvtliftruojlspmngflwwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113975.1989286-183-222597940016056/AnsiballZ_copy.py'
Nov 25 23:39:36 compute-0 sudo[136040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:36 compute-0 python3.9[136042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113975.1989286-183-222597940016056/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4cd48abd63382b9dcb37144a7ad2a1da07f0d3bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:36 compute-0 sudo[136040]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:39:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:39:37 compute-0 ceph-mon[75654]: pgmap v284: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:37 compute-0 sudo[136192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkwniewbqxuxmvuqwsswboszgxwzhxge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113977.4437459-243-8863280227786/AnsiballZ_file.py'
Nov 25 23:39:37 compute-0 sudo[136192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:38 compute-0 python3.9[136194]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:38 compute-0 sudo[136192]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:38 compute-0 sudo[136344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntdjhaempodibtntorutgekbjmmliwge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113978.253518-251-245657119477948/AnsiballZ_stat.py'
Nov 25 23:39:38 compute-0 sudo[136344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:38 compute-0 python3.9[136346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:38 compute-0 sudo[136344]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:39 compute-0 ceph-mon[75654]: pgmap v285: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:39 compute-0 sudo[136467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-losbcstdtclerygqyaktywvqvaukkuii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113978.253518-251-245657119477948/AnsiballZ_copy.py'
Nov 25 23:39:39 compute-0 sudo[136467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:39 compute-0 python3.9[136469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113978.253518-251-245657119477948/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:39 compute-0 sudo[136467]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:40 compute-0 sudo[136619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igzkxeowtgzlgtzxygbktgpjjcwtywsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113979.7172058-267-249208884801588/AnsiballZ_file.py'
Nov 25 23:39:40 compute-0 sudo[136619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v286: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:40 compute-0 python3.9[136621]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:40 compute-0 sudo[136619]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:40 compute-0 sudo[136771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kitufxjotjaafiqokykgipicwkfxthio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113980.4917448-275-256866420861290/AnsiballZ_stat.py'
Nov 25 23:39:40 compute-0 sudo[136771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:41 compute-0 python3.9[136773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:41 compute-0 sudo[136771]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:41 compute-0 ceph-mon[75654]: pgmap v286: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:41 compute-0 sudo[136894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycopznwvgkgkakzyempsuhcgxbpnqmsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113980.4917448-275-256866420861290/AnsiballZ_copy.py'
Nov 25 23:39:41 compute-0 sudo[136894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:41 compute-0 python3.9[136896]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113980.4917448-275-256866420861290/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:41 compute-0 sudo[136894]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:42 compute-0 sudo[137046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wejmpwpfdbaqvcshjqgiaeepnqhjpinx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113982.2123826-291-139582469544023/AnsiballZ_file.py'
Nov 25 23:39:42 compute-0 sudo[137046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:42 compute-0 python3.9[137048]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:42 compute-0 sudo[137046]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:43 compute-0 ceph-mon[75654]: pgmap v287: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:43 compute-0 sudo[137198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqqwynhxdudwwmulozdkyolqgnkersdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113982.9753563-299-65149963766971/AnsiballZ_stat.py'
Nov 25 23:39:43 compute-0 sudo[137198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:43 compute-0 python3.9[137200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:43 compute-0 sudo[137198]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:43 compute-0 sudo[137321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgmrfxvodncfhmpnhpqkkrmxtwdgfxuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113982.9753563-299-65149963766971/AnsiballZ_copy.py'
Nov 25 23:39:43 compute-0 sudo[137321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:44 compute-0 python3.9[137323]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113982.9753563-299-65149963766971/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v288: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:44 compute-0 sudo[137321]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:44 compute-0 sudo[137473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soqttocznhczfyncavifqplxglzgkgsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113984.394637-315-220465928962129/AnsiballZ_file.py'
Nov 25 23:39:44 compute-0 sudo[137473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:44 compute-0 python3.9[137475]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:45 compute-0 sudo[137473]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:45 compute-0 ceph-mon[75654]: pgmap v288: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:45 compute-0 sudo[137625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnrdpxultgbymkwiwwkajpknetodfxws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113985.197418-323-245228928062019/AnsiballZ_stat.py'
Nov 25 23:39:45 compute-0 sudo[137625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:45 compute-0 python3.9[137627]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:45 compute-0 sudo[137625]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:46 compute-0 sudo[137748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzhjndsvndpnhaezslirgoduegydfghz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113985.197418-323-245228928062019/AnsiballZ_copy.py'
Nov 25 23:39:46 compute-0 sudo[137748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:46 compute-0 python3.9[137750]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113985.197418-323-245228928062019/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:46 compute-0 sudo[137748]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:46 compute-0 sudo[137900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeoanfftzozjfupynsvimchpegvacagu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113986.7093272-339-119980825636413/AnsiballZ_file.py'
Nov 25 23:39:46 compute-0 sudo[137900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:47 compute-0 python3.9[137902]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:47 compute-0 sudo[137900]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:47 compute-0 ceph-mon[75654]: pgmap v289: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:47 compute-0 sudo[138052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anantgmfqakkarbgbtineonpldwqlrcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113987.368622-347-152964366831669/AnsiballZ_stat.py'
Nov 25 23:39:47 compute-0 sudo[138052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:47 compute-0 python3.9[138054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:47 compute-0 sudo[138052]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:48 compute-0 sudo[138175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvrhrkuczzclptpgnninbwtciqqtqtmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113987.368622-347-152964366831669/AnsiballZ_copy.py'
Nov 25 23:39:48 compute-0 sudo[138175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:48 compute-0 python3.9[138177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113987.368622-347-152964366831669/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:48 compute-0 sudo[138175]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:49 compute-0 sudo[138327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aerovemsrqyhaoghnzdqbbsbmgustprx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113988.918931-363-209192930536563/AnsiballZ_file.py'
Nov 25 23:39:49 compute-0 sudo[138327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:49 compute-0 ceph-mon[75654]: pgmap v290: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:49 compute-0 python3.9[138329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:39:49 compute-0 sudo[138327]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:50 compute-0 sudo[138479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sukzzwabfscyafrjlqxkkpcyeebslfwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113989.7105746-371-155055669596739/AnsiballZ_stat.py'
Nov 25 23:39:50 compute-0 sudo[138479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:50 compute-0 python3.9[138481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:50 compute-0 sudo[138479]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:50 compute-0 sudo[138602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzmatxaanpjhdoftyaeljtmaatyqmakx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113989.7105746-371-155055669596739/AnsiballZ_copy.py'
Nov 25 23:39:50 compute-0 sudo[138602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:51 compute-0 python3.9[138604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764113989.7105746-371-155055669596739/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=70850e65b5e36c1a89abd37d8f250d78131a9b48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:51 compute-0 sudo[138602]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:51 compute-0 ceph-mon[75654]: pgmap v291: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:51 compute-0 sshd-session[131620]: Connection closed by 192.168.122.30 port 40942
Nov 25 23:39:51 compute-0 sshd-session[131617]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:39:51 compute-0 systemd-logind[789]: Session 44 logged out. Waiting for processes to exit.
Nov 25 23:39:51 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 25 23:39:51 compute-0 systemd[1]: session-44.scope: Consumed 27.505s CPU time.
Nov 25 23:39:51 compute-0 systemd-logind[789]: Removed session 44.
Nov 25 23:39:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:53 compute-0 ceph-mon[75654]: pgmap v292: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:39:55 compute-0 ceph-mon[75654]: pgmap v293: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:39:56
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:39:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:57 compute-0 ceph-mon[75654]: pgmap v294: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:57 compute-0 sshd-session[138629]: Accepted publickey for zuul from 192.168.122.30 port 39792 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:39:57 compute-0 systemd-logind[789]: New session 45 of user zuul.
Nov 25 23:39:57 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 25 23:39:57 compute-0 sshd-session[138629]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:39:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:58 compute-0 sudo[138782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiuorvcwejdawconqibfluvuvctnorky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113997.9925168-22-217313281567332/AnsiballZ_file.py'
Nov 25 23:39:58 compute-0 sudo[138782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:58 compute-0 python3.9[138784]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:39:58 compute-0 sudo[138782]: pam_unix(sudo:session): session closed for user root
Nov 25 23:39:59 compute-0 ceph-mon[75654]: pgmap v295: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:39:59 compute-0 sudo[138934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qabcmsisbyddnlosfoexiwbryplqlnqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113999.044084-34-113953875962895/AnsiballZ_stat.py'
Nov 25 23:39:59 compute-0 sudo[138934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:39:59 compute-0 python3.9[138936]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:39:59 compute-0 sudo[138934]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:00 compute-0 sudo[139057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oepztqazexiymzsxomjdhksdtlymtspc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764113999.044084-34-113953875962895/AnsiballZ_copy.py'
Nov 25 23:40:00 compute-0 sudo[139057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:00 compute-0 python3.9[139059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764113999.044084-34-113953875962895/.source.conf _original_basename=ceph.conf follow=False checksum=2731b3c25c88107bdeb6ffd28d9d5d2aeb7ab117 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:00 compute-0 sudo[139057]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:01 compute-0 sudo[139209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xssayzqrygyuxlcsubiahhdfykmwklsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114000.8133557-34-69613324286698/AnsiballZ_stat.py'
Nov 25 23:40:01 compute-0 sudo[139209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:01 compute-0 python3.9[139211]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:01 compute-0 sudo[139209]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:01 compute-0 ceph-mon[75654]: pgmap v296: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:40:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:40:01 compute-0 sudo[139332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikvycvovdveiuycsorluxnozwaysstsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114000.8133557-34-69613324286698/AnsiballZ_copy.py'
Nov 25 23:40:01 compute-0 sudo[139332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:01 compute-0 python3.9[139334]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114000.8133557-34-69613324286698/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=7118a3e4848d5b96f84dfc7266d24215d2762b5c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:02 compute-0 sudo[139332]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:02 compute-0 sshd-session[138632]: Connection closed by 192.168.122.30 port 39792
Nov 25 23:40:02 compute-0 sshd-session[138629]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:40:02 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 25 23:40:02 compute-0 systemd[1]: session-45.scope: Consumed 3.169s CPU time.
Nov 25 23:40:02 compute-0 systemd-logind[789]: Session 45 logged out. Waiting for processes to exit.
Nov 25 23:40:02 compute-0 systemd-logind[789]: Removed session 45.
Nov 25 23:40:03 compute-0 ceph-mon[75654]: pgmap v297: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:05 compute-0 ceph-mon[75654]: pgmap v298: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:07 compute-0 ceph-mon[75654]: pgmap v299: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v300: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:08 compute-0 sshd-session[139359]: Accepted publickey for zuul from 192.168.122.30 port 60068 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:40:08 compute-0 systemd-logind[789]: New session 46 of user zuul.
Nov 25 23:40:08 compute-0 systemd[1]: Started Session 46 of User zuul.
Nov 25 23:40:08 compute-0 sshd-session[139359]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:40:09 compute-0 ceph-mon[75654]: pgmap v300: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:09 compute-0 python3.9[139512]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:40:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:10 compute-0 sudo[139670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eetjedurfylwkyrfjtkhheglyxkynwtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114010.334068-34-105605638838762/AnsiballZ_file.py'
Nov 25 23:40:10 compute-0 sudo[139670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:11 compute-0 python3.9[139672]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:40:11 compute-0 sudo[139670]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:11 compute-0 ceph-mon[75654]: pgmap v301: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:11 compute-0 sshd-session[139566]: Invalid user admin from 202.83.162.167 port 46934
Nov 25 23:40:11 compute-0 sudo[139822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkvrczxabkrjrwgyycgbdziunxbtlxsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114011.332688-34-11417435210395/AnsiballZ_file.py'
Nov 25 23:40:11 compute-0 sudo[139822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:11 compute-0 sshd-session[139566]: Received disconnect from 202.83.162.167 port 46934:11: Bye Bye [preauth]
Nov 25 23:40:11 compute-0 sshd-session[139566]: Disconnected from invalid user admin 202.83.162.167 port 46934 [preauth]
Nov 25 23:40:11 compute-0 python3.9[139824]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:40:11 compute-0 sudo[139822]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:12 compute-0 python3.9[139974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:40:13 compute-0 sshd-session[139541]: Connection closed by 45.78.216.132 port 56782 [preauth]
Nov 25 23:40:13 compute-0 ceph-mon[75654]: pgmap v302: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:13 compute-0 sudo[140124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kclbchwpougefiuiasxuowtdkmuwmucm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114013.1091943-57-136398341234060/AnsiballZ_seboolean.py'
Nov 25 23:40:13 compute-0 sudo[140124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:13 compute-0 python3.9[140126]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 23:40:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:14 compute-0 sudo[140124]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:15 compute-0 ceph-mon[75654]: pgmap v303: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:15 compute-0 sudo[140280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdipwbqnxxkzjmfnfuzosotrsyhvwutk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114015.5710075-67-178289409986175/AnsiballZ_setup.py'
Nov 25 23:40:15 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 25 23:40:15 compute-0 sudo[140280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:16 compute-0 python3.9[140282]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:40:16 compute-0 sudo[140280]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:17 compute-0 sudo[140364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjvftdkszmkjflfruvcxfvuosjfwctxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114015.5710075-67-178289409986175/AnsiballZ_dnf.py'
Nov 25 23:40:17 compute-0 sudo[140364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:17 compute-0 python3.9[140366]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:40:17 compute-0 ceph-mon[75654]: pgmap v304: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:18 compute-0 sudo[140364]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:19 compute-0 sudo[140517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evlesmowdemifmxtgasteonppoxulaip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114018.7735639-79-264194605448134/AnsiballZ_systemd.py'
Nov 25 23:40:19 compute-0 sudo[140517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:19 compute-0 ceph-mon[75654]: pgmap v305: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:19 compute-0 python3.9[140519]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 23:40:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:20 compute-0 sudo[140517]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:21 compute-0 ceph-mon[75654]: pgmap v306: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:21 compute-0 sudo[140672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndtkagxhayhjhplldcvhmpczvzrcbkoa ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764114021.1183224-87-249767612951105/AnsiballZ_edpm_nftables_snippet.py'
Nov 25 23:40:21 compute-0 sudo[140672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:21 compute-0 python3[140674]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 25 23:40:21 compute-0 sudo[140672]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:22 compute-0 sudo[140824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arhtlsjmwzxaygsfprywyhrftvcdpnvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114022.2395-96-196680619685006/AnsiballZ_file.py'
Nov 25 23:40:22 compute-0 sudo[140824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:22 compute-0 python3.9[140826]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:22 compute-0 sudo[140824]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:23 compute-0 ceph-mon[75654]: pgmap v307: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:23 compute-0 sudo[140976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xojutwgwhoocuhcdoyfmsrkypgykbiym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114023.1036353-104-190115209011118/AnsiballZ_stat.py'
Nov 25 23:40:23 compute-0 sudo[140976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:23 compute-0 python3.9[140978]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:23 compute-0 sudo[140976]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:24 compute-0 sudo[141054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xowihngyqgpgoyetnqtyhgvdkibctnzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114023.1036353-104-190115209011118/AnsiballZ_file.py'
Nov 25 23:40:24 compute-0 sudo[141054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:24 compute-0 python3.9[141056]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:24 compute-0 sudo[141054]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:25 compute-0 sudo[141206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpkpajlsnzvuhliwudubpdaoflpihaej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114024.6429315-116-154208042030328/AnsiballZ_stat.py'
Nov 25 23:40:25 compute-0 sudo[141206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:25 compute-0 python3.9[141208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:25 compute-0 sudo[141206]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:25 compute-0 ceph-mon[75654]: pgmap v308: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:25 compute-0 sudo[141284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjstvzfpyusyzzbffphpnvmhohqlorbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114024.6429315-116-154208042030328/AnsiballZ_file.py'
Nov 25 23:40:25 compute-0 sudo[141284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:25 compute-0 python3.9[141286]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.lsm9ftva recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:25 compute-0 sudo[141284]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:40:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:40:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:40:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:40:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:40:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:40:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:26 compute-0 sudo[141436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmlhiiznbcdxqjjtqpxkjudmtpaugnql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114026.0401056-128-64306029042150/AnsiballZ_stat.py'
Nov 25 23:40:26 compute-0 sudo[141436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:26 compute-0 python3.9[141438]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:26 compute-0 sudo[141436]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:26 compute-0 sudo[141514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqeiquxsstypcaxfpekgmxprkfydersk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114026.0401056-128-64306029042150/AnsiballZ_file.py'
Nov 25 23:40:26 compute-0 sudo[141514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:27 compute-0 python3.9[141516]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:27 compute-0 sudo[141514]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:27 compute-0 ceph-mon[75654]: pgmap v309: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:27 compute-0 sudo[141666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiilftzxrenukawoinqcrhipbfcvaslb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114027.4626594-141-270707691683332/AnsiballZ_command.py'
Nov 25 23:40:27 compute-0 sudo[141666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:28 compute-0 python3.9[141668]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:40:28 compute-0 sudo[141666]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:29 compute-0 sudo[141819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmsxzoxowiztbmlatqauvjppxxnfluob ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764114028.4914534-149-149630222307727/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 23:40:29 compute-0 sudo[141819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:29 compute-0 python3[141821]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 23:40:29 compute-0 sudo[141819]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:29 compute-0 ceph-mon[75654]: pgmap v310: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:29 compute-0 sudo[141971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlcupbgiucrticgvyklgykwylnwlvzgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114029.4797447-157-161243024922662/AnsiballZ_stat.py'
Nov 25 23:40:29 compute-0 sudo[141971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:30 compute-0 python3.9[141973]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:30 compute-0 sudo[141971]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:30 compute-0 sudo[142096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiujmbyrpdstwpvwsklhgxckmrzoeaqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114029.4797447-157-161243024922662/AnsiballZ_copy.py'
Nov 25 23:40:30 compute-0 sudo[142096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:30 compute-0 python3.9[142098]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114029.4797447-157-161243024922662/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:30 compute-0 sudo[142096]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:31 compute-0 ceph-mon[75654]: pgmap v311: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:31 compute-0 sudo[142248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evpjnoxfgfwkksqubmppdtrjenmpgjge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114031.2438476-172-226970138791403/AnsiballZ_stat.py'
Nov 25 23:40:31 compute-0 sudo[142248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:31 compute-0 python3.9[142250]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:31 compute-0 sudo[142248]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:32 compute-0 sudo[142373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgnpugijxexmvwxljvzqeduoihfyzywf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114031.2438476-172-226970138791403/AnsiballZ_copy.py'
Nov 25 23:40:32 compute-0 sudo[142373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:32 compute-0 python3.9[142375]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114031.2438476-172-226970138791403/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:32 compute-0 sudo[142373]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:33 compute-0 sudo[142527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brklkqfkczjgzbptbhwapvuclaserevj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114032.8423355-187-265472278721859/AnsiballZ_stat.py'
Nov 25 23:40:33 compute-0 sudo[142527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:33 compute-0 python3.9[142529]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:33 compute-0 sudo[142527]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:33 compute-0 ceph-mon[75654]: pgmap v312: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:33 compute-0 sudo[142652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aftcrehtsucmdftcltqiubozppepvjuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114032.8423355-187-265472278721859/AnsiballZ_copy.py'
Nov 25 23:40:33 compute-0 sudo[142652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:34 compute-0 sshd-session[142482]: Invalid user jack from 124.18.141.70 port 50022
Nov 25 23:40:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:34 compute-0 python3.9[142654]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114032.8423355-187-265472278721859/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:34 compute-0 sudo[142652]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:34 compute-0 sshd-session[142482]: Received disconnect from 124.18.141.70 port 50022:11: Bye Bye [preauth]
Nov 25 23:40:34 compute-0 sshd-session[142482]: Disconnected from invalid user jack 124.18.141.70 port 50022 [preauth]
Nov 25 23:40:34 compute-0 sudo[142804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmbskxyzswmlderesthmowoelmbriiqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114034.4581976-202-86526455506206/AnsiballZ_stat.py'
Nov 25 23:40:34 compute-0 sudo[142804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:34 compute-0 python3.9[142806]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:35 compute-0 sudo[142804]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:35 compute-0 sudo[142929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onqzgniuqcrikvwdvkfzhdxkefxmmxml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114034.4581976-202-86526455506206/AnsiballZ_copy.py'
Nov 25 23:40:35 compute-0 sudo[142929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:35 compute-0 ceph-mon[75654]: pgmap v313: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:35 compute-0 python3.9[142931]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114034.4581976-202-86526455506206/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:35 compute-0 sudo[142929]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:36 compute-0 sudo[143056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:40:36 compute-0 sudo[143056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:36 compute-0 sudo[143056]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:36 compute-0 sudo[143104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrnufjptlpnslgytcfbvihbxggbilxgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114035.9360292-217-4637282991207/AnsiballZ_stat.py'
Nov 25 23:40:36 compute-0 sudo[143104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:36 compute-0 sudo[143109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:40:36 compute-0 sudo[143109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:36 compute-0 sudo[143109]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:36 compute-0 sudo[143134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:40:36 compute-0 sudo[143134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:36 compute-0 sudo[143134]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:36 compute-0 python3.9[143108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:36 compute-0 sudo[143159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:40:36 compute-0 sudo[143159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:36 compute-0 sudo[143104]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:37 compute-0 sudo[143327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myowjbifuanufpzvjziqlharddvxhsxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114035.9360292-217-4637282991207/AnsiballZ_copy.py'
Nov 25 23:40:37 compute-0 sudo[143327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:37 compute-0 sudo[143159]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:40:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:40:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:40:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:40:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:40:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:40:37 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 76468d1e-4f2e-49c1-b3ec-90ee88a19724 does not exist
Nov 25 23:40:37 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 84fb1ff5-ab67-4473-a789-de4ac3c61791 does not exist
Nov 25 23:40:37 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 387496ba-f49c-4711-b60a-d2e1faefac8c does not exist
Nov 25 23:40:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:40:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:40:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:40:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:40:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:40:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:40:37 compute-0 sudo[143342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:40:37 compute-0 sudo[143342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:37 compute-0 sudo[143342]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:37 compute-0 python3.9[143337]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114035.9360292-217-4637282991207/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:37 compute-0 sudo[143327]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:37 compute-0 sudo[143367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:40:37 compute-0 sudo[143367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:37 compute-0 sudo[143367]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:37 compute-0 sudo[143392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:40:37 compute-0 sudo[143392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:37 compute-0 sudo[143392]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:37 compute-0 sudo[143441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:40:37 compute-0 sudo[143441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:37 compute-0 ceph-mon[75654]: pgmap v314: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:40:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:40:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:40:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:40:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:40:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:40:37 compute-0 podman[143580]: 2025-11-25 23:40:37.80066877 +0000 UTC m=+0.043229943 container create 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 23:40:37 compute-0 systemd[1]: Started libpod-conmon-72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83.scope.
Nov 25 23:40:37 compute-0 podman[143580]: 2025-11-25 23:40:37.783187669 +0000 UTC m=+0.025748842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:40:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:40:37 compute-0 podman[143580]: 2025-11-25 23:40:37.926359389 +0000 UTC m=+0.168920612 container init 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 23:40:37 compute-0 podman[143580]: 2025-11-25 23:40:37.944762275 +0000 UTC m=+0.187323488 container start 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 23:40:37 compute-0 sudo[143649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhusugcssttwyfhrexcltgmefkyykdpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114037.5474613-232-78494290604899/AnsiballZ_file.py'
Nov 25 23:40:37 compute-0 sudo[143649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:37 compute-0 podman[143580]: 2025-11-25 23:40:37.948607947 +0000 UTC m=+0.191169130 container attach 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:40:37 compute-0 gallant_knuth[143620]: 167 167
Nov 25 23:40:37 compute-0 systemd[1]: libpod-72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83.scope: Deactivated successfully.
Nov 25 23:40:37 compute-0 podman[143580]: 2025-11-25 23:40:37.951490343 +0000 UTC m=+0.194051526 container died 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:40:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b5961e259e82ef71a840599fc8fa64c39b84b387a9b40a3db7a363d25befabd-merged.mount: Deactivated successfully.
Nov 25 23:40:38 compute-0 podman[143580]: 2025-11-25 23:40:38.004573345 +0000 UTC m=+0.247134528 container remove 72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_knuth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:40:38 compute-0 systemd[1]: libpod-conmon-72fb4e1d488e0cf4ddd4ddcb541adf76172c74f55c60f3d1496f7f35c8873f83.scope: Deactivated successfully.
Nov 25 23:40:38 compute-0 python3.9[143653]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:38 compute-0 podman[143676]: 2025-11-25 23:40:38.141026388 +0000 UTC m=+0.042061992 container create 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:40:38 compute-0 sudo[143649]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:38 compute-0 systemd[1]: Started libpod-conmon-5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd.scope.
Nov 25 23:40:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:40:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:38 compute-0 podman[143676]: 2025-11-25 23:40:38.121698747 +0000 UTC m=+0.022734381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:40:38 compute-0 podman[143676]: 2025-11-25 23:40:38.225659663 +0000 UTC m=+0.126695277 container init 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:40:38 compute-0 podman[143676]: 2025-11-25 23:40:38.241712667 +0000 UTC m=+0.142748301 container start 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:40:38 compute-0 podman[143676]: 2025-11-25 23:40:38.246514754 +0000 UTC m=+0.147550378 container attach 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:40:38 compute-0 sudo[143847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmkggcrlsimifcsvcrhaxmnlmmyjnfqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114038.3472202-240-212907308310213/AnsiballZ_command.py'
Nov 25 23:40:38 compute-0 sudo[143847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:38 compute-0 python3.9[143849]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:40:38 compute-0 sudo[143847]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:39 compute-0 strange_kapitsa[143697]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:40:39 compute-0 strange_kapitsa[143697]: --> relative data size: 1.0
Nov 25 23:40:39 compute-0 strange_kapitsa[143697]: --> All data devices are unavailable
Nov 25 23:40:39 compute-0 systemd[1]: libpod-5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd.scope: Deactivated successfully.
Nov 25 23:40:39 compute-0 systemd[1]: libpod-5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd.scope: Consumed 1.031s CPU time.
Nov 25 23:40:39 compute-0 podman[143676]: 2025-11-25 23:40:39.347318763 +0000 UTC m=+1.248354397 container died 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 23:40:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6ac20287d5b334a2b74cb68b479c82c14e45d92bfdcb387978e1969d6562d91-merged.mount: Deactivated successfully.
Nov 25 23:40:39 compute-0 podman[143676]: 2025-11-25 23:40:39.414124317 +0000 UTC m=+1.315159931 container remove 5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:40:39 compute-0 systemd[1]: libpod-conmon-5a919a94ba59865762b4b250335b38b635d1fc03787ecd77b49b8b24e1d630bd.scope: Deactivated successfully.
Nov 25 23:40:39 compute-0 sudo[143441]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:39 compute-0 sudo[143988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:40:39 compute-0 sudo[143988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:39 compute-0 sudo[143988]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:39 compute-0 ceph-mon[75654]: pgmap v315: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:39 compute-0 sudo[144037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:40:39 compute-0 sudo[144037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:39 compute-0 sudo[144037]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:39 compute-0 sudo[144086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsgfldittmircyeqsdfhesbjuyihdruy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114039.1161637-248-208899356904618/AnsiballZ_blockinfile.py'
Nov 25 23:40:39 compute-0 sudo[144086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:39 compute-0 sudo[144090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:40:39 compute-0 sudo[144090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:39 compute-0 sudo[144090]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:39 compute-0 sudo[144116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:40:39 compute-0 sudo[144116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:39 compute-0 python3.9[144093]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:39 compute-0 sudo[144086]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:40 compute-0 podman[144228]: 2025-11-25 23:40:40.223621134 +0000 UTC m=+0.068342756 container create b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:40:40 compute-0 systemd[1]: Started libpod-conmon-b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573.scope.
Nov 25 23:40:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:40 compute-0 podman[144228]: 2025-11-25 23:40:40.194906305 +0000 UTC m=+0.039627967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:40:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:40:40 compute-0 podman[144228]: 2025-11-25 23:40:40.325304279 +0000 UTC m=+0.170025881 container init b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:40:40 compute-0 podman[144228]: 2025-11-25 23:40:40.33215116 +0000 UTC m=+0.176872772 container start b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:40:40 compute-0 podman[144228]: 2025-11-25 23:40:40.335966421 +0000 UTC m=+0.180688023 container attach b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 23:40:40 compute-0 gallant_ride[144273]: 167 167
Nov 25 23:40:40 compute-0 systemd[1]: libpod-b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573.scope: Deactivated successfully.
Nov 25 23:40:40 compute-0 podman[144228]: 2025-11-25 23:40:40.342698989 +0000 UTC m=+0.187420601 container died b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Nov 25 23:40:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9b8f5c7fb9dfbca8d5a3158e10704e875009fb248abcf49c021ca53584e4492-merged.mount: Deactivated successfully.
Nov 25 23:40:40 compute-0 podman[144228]: 2025-11-25 23:40:40.390796469 +0000 UTC m=+0.235518091 container remove b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:40:40 compute-0 systemd[1]: libpod-conmon-b4332e9ec5bf4faef3d9a65dc1d91212967bb11bb9f7b87eab75d9d74e4da573.scope: Deactivated successfully.
Nov 25 23:40:40 compute-0 sudo[144377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvoqbfsixnmlqlmrwnulwnipavnftjrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114040.147954-257-48524358810868/AnsiballZ_command.py'
Nov 25 23:40:40 compute-0 sudo[144377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:40 compute-0 podman[144365]: 2025-11-25 23:40:40.578860754 +0000 UTC m=+0.053608646 container create 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 23:40:40 compute-0 systemd[1]: Started libpod-conmon-0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146.scope.
Nov 25 23:40:40 compute-0 podman[144365]: 2025-11-25 23:40:40.554619775 +0000 UTC m=+0.029367757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:40:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f95f96fcb8752f12c5065bd0dd6a92dc0bec704c3ac7fd064726ba4dc2bb18c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f95f96fcb8752f12c5065bd0dd6a92dc0bec704c3ac7fd064726ba4dc2bb18c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f95f96fcb8752f12c5065bd0dd6a92dc0bec704c3ac7fd064726ba4dc2bb18c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f95f96fcb8752f12c5065bd0dd6a92dc0bec704c3ac7fd064726ba4dc2bb18c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:40 compute-0 podman[144365]: 2025-11-25 23:40:40.680798067 +0000 UTC m=+0.155545969 container init 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:40:40 compute-0 podman[144365]: 2025-11-25 23:40:40.697472127 +0000 UTC m=+0.172220049 container start 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:40:40 compute-0 podman[144365]: 2025-11-25 23:40:40.701209836 +0000 UTC m=+0.175957738 container attach 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 23:40:40 compute-0 python3.9[144384]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:40:40 compute-0 sudo[144377]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:41 compute-0 sudo[144546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awwjdupnewmfunkurrllrnkzxrbnueqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114040.969908-265-124043330524549/AnsiballZ_stat.py'
Nov 25 23:40:41 compute-0 sudo[144546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]: {
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:     "0": [
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:         {
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "devices": [
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "/dev/loop3"
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             ],
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_name": "ceph_lv0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_size": "21470642176",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "name": "ceph_lv0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "tags": {
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.cluster_name": "ceph",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.crush_device_class": "",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.encrypted": "0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.osd_id": "0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.type": "block",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.vdo": "0"
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             },
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "type": "block",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "vg_name": "ceph_vg0"
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:         }
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:     ],
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:     "1": [
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:         {
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "devices": [
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "/dev/loop4"
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             ],
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_name": "ceph_lv1",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_size": "21470642176",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "name": "ceph_lv1",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "tags": {
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.cluster_name": "ceph",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.crush_device_class": "",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.encrypted": "0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.osd_id": "1",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.type": "block",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.vdo": "0"
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             },
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "type": "block",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "vg_name": "ceph_vg1"
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:         }
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:     ],
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:     "2": [
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:         {
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "devices": [
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "/dev/loop5"
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             ],
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_name": "ceph_lv2",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_size": "21470642176",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "name": "ceph_lv2",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "tags": {
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.cluster_name": "ceph",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.crush_device_class": "",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.encrypted": "0",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.osd_id": "2",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.type": "block",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:                 "ceph.vdo": "0"
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             },
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "type": "block",
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:             "vg_name": "ceph_vg2"
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:         }
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]:     ]
Nov 25 23:40:41 compute-0 xenodochial_tharp[144389]: }
Nov 25 23:40:41 compute-0 systemd[1]: libpod-0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146.scope: Deactivated successfully.
Nov 25 23:40:41 compute-0 podman[144365]: 2025-11-25 23:40:41.477776073 +0000 UTC m=+0.952523965 container died 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f95f96fcb8752f12c5065bd0dd6a92dc0bec704c3ac7fd064726ba4dc2bb18c-merged.mount: Deactivated successfully.
Nov 25 23:40:41 compute-0 podman[144365]: 2025-11-25 23:40:41.552464526 +0000 UTC m=+1.027212418 container remove 0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:40:41 compute-0 systemd[1]: libpod-conmon-0ef22848f5f69b3a90c2b51331d7ec6fd6504b67e31e9aa089f6dcfc2eb59146.scope: Deactivated successfully.
Nov 25 23:40:41 compute-0 ceph-mon[75654]: pgmap v316: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:41 compute-0 sudo[144116]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:41 compute-0 python3.9[144548]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:40:41 compute-0 sudo[144546]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:41 compute-0 sudo[144565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:40:41 compute-0 sudo[144565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:41 compute-0 sudo[144565]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:41 compute-0 sudo[144600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:40:41 compute-0 sudo[144600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:41 compute-0 sudo[144600]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:41 compute-0 sudo[144641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:40:41 compute-0 sudo[144641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:41 compute-0 sudo[144641]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:41 compute-0 sudo[144688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:40:41 compute-0 sudo[144688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:42 compute-0 sudo[144850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfzbnuneaksnhhmkbrxeymljrnnwcntb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114041.8809507-273-267793294771122/AnsiballZ_command.py'
Nov 25 23:40:42 compute-0 sudo[144850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:42 compute-0 podman[144860]: 2025-11-25 23:40:42.415091505 +0000 UTC m=+0.068514620 container create 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:40:42 compute-0 systemd[1]: Started libpod-conmon-68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236.scope.
Nov 25 23:40:42 compute-0 podman[144860]: 2025-11-25 23:40:42.386901651 +0000 UTC m=+0.040324806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:40:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:40:42 compute-0 podman[144860]: 2025-11-25 23:40:42.526076056 +0000 UTC m=+0.179499221 container init 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:40:42 compute-0 podman[144860]: 2025-11-25 23:40:42.538590796 +0000 UTC m=+0.192013911 container start 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:40:42 compute-0 podman[144860]: 2025-11-25 23:40:42.542712295 +0000 UTC m=+0.196135410 container attach 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 23:40:42 compute-0 practical_hopper[144876]: 167 167
Nov 25 23:40:42 compute-0 python3.9[144858]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:40:42 compute-0 systemd[1]: libpod-68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236.scope: Deactivated successfully.
Nov 25 23:40:42 compute-0 sudo[144850]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:42 compute-0 podman[144882]: 2025-11-25 23:40:42.615335233 +0000 UTC m=+0.042556905 container died 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:40:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c50b5cd25fc6d714b7dff4a5768ebfff2d947f196e37967344a6426c5fb3fed-merged.mount: Deactivated successfully.
Nov 25 23:40:42 compute-0 podman[144882]: 2025-11-25 23:40:42.657344293 +0000 UTC m=+0.084565915 container remove 68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 23:40:42 compute-0 systemd[1]: libpod-conmon-68b30db24bab63162255f60f87234ff010ca68e3d692970087713fd968fc0236.scope: Deactivated successfully.
Nov 25 23:40:42 compute-0 podman[144953]: 2025-11-25 23:40:42.907588861 +0000 UTC m=+0.071616903 container create 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 23:40:42 compute-0 systemd[1]: Started libpod-conmon-5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae.scope.
Nov 25 23:40:42 compute-0 podman[144953]: 2025-11-25 23:40:42.879844918 +0000 UTC m=+0.043873010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:40:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79b1073b562df039a976ed1f58ead015484db472e98805ab8230e9740bda87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79b1073b562df039a976ed1f58ead015484db472e98805ab8230e9740bda87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79b1073b562df039a976ed1f58ead015484db472e98805ab8230e9740bda87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e79b1073b562df039a976ed1f58ead015484db472e98805ab8230e9740bda87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:40:42 compute-0 podman[144953]: 2025-11-25 23:40:42.992884853 +0000 UTC m=+0.156912875 container init 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 23:40:43 compute-0 podman[144953]: 2025-11-25 23:40:43.010621971 +0000 UTC m=+0.174649993 container start 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 23:40:43 compute-0 podman[144953]: 2025-11-25 23:40:43.013965399 +0000 UTC m=+0.177993441 container attach 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:40:43 compute-0 sudo[145077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofyvqukcgadbhaznujayjkbstlffjsgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114042.803563-281-48422502014886/AnsiballZ_file.py'
Nov 25 23:40:43 compute-0 sudo[145077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:43 compute-0 python3.9[145079]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:43 compute-0 sudo[145077]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:43 compute-0 ceph-mon[75654]: pgmap v317: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:44 compute-0 nervous_elion[145006]: {
Nov 25 23:40:44 compute-0 nervous_elion[145006]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "osd_id": 2,
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "type": "bluestore"
Nov 25 23:40:44 compute-0 nervous_elion[145006]:     },
Nov 25 23:40:44 compute-0 nervous_elion[145006]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "osd_id": 1,
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "type": "bluestore"
Nov 25 23:40:44 compute-0 nervous_elion[145006]:     },
Nov 25 23:40:44 compute-0 nervous_elion[145006]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "osd_id": 0,
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:40:44 compute-0 nervous_elion[145006]:         "type": "bluestore"
Nov 25 23:40:44 compute-0 nervous_elion[145006]:     }
Nov 25 23:40:44 compute-0 nervous_elion[145006]: }
Nov 25 23:40:44 compute-0 systemd[1]: libpod-5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae.scope: Deactivated successfully.
Nov 25 23:40:44 compute-0 podman[144953]: 2025-11-25 23:40:44.070273134 +0000 UTC m=+1.234301176 container died 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 23:40:44 compute-0 systemd[1]: libpod-5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae.scope: Consumed 1.065s CPU time.
Nov 25 23:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e79b1073b562df039a976ed1f58ead015484db472e98805ab8230e9740bda87-merged.mount: Deactivated successfully.
Nov 25 23:40:44 compute-0 podman[144953]: 2025-11-25 23:40:44.15000794 +0000 UTC m=+1.314035952 container remove 5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elion, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 23:40:44 compute-0 systemd[1]: libpod-conmon-5902e114cc48e6765a2a6679955979d274b7ee2e8da17aae3e7c0727bec514ae.scope: Deactivated successfully.
Nov 25 23:40:44 compute-0 sudo[144688]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:40:44 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:40:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:40:44 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:40:44 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 282d49a5-96ba-4e26-b13d-d594e373ac69 does not exist
Nov 25 23:40:44 compute-0 sudo[145203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:40:44 compute-0 sudo[145203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:44 compute-0 sudo[145203]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:44 compute-0 sudo[145243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:40:44 compute-0 sudo[145243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:40:44 compute-0 sudo[145243]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:44 compute-0 python3.9[145318]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:40:45 compute-0 ceph-mon[75654]: pgmap v318: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:40:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:40:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.282999) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045283062, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 845, "num_deletes": 250, "total_data_size": 792371, "memory_usage": 808712, "flush_reason": "Manual Compaction"}
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045290890, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 507851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6778, "largest_seqno": 7622, "table_properties": {"data_size": 504359, "index_size": 1272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8793, "raw_average_key_size": 19, "raw_value_size": 496971, "raw_average_value_size": 1106, "num_data_blocks": 60, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113969, "oldest_key_time": 1764113969, "file_creation_time": 1764114045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7916 microseconds, and 2073 cpu microseconds.
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.290926) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 507851 bytes OK
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.290943) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.292663) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.292675) EVENT_LOG_v1 {"time_micros": 1764114045292671, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.292689) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 788198, prev total WAL file size 788198, number of live WAL files 2.
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.293079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(495KB)], [20(5460KB)]
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045293109, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 6099484, "oldest_snapshot_seqno": -1}
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 2627 keys, 4461660 bytes, temperature: kUnknown
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045312814, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 4461660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4442145, "index_size": 11854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6597, "raw_key_size": 60833, "raw_average_key_size": 23, "raw_value_size": 4393082, "raw_average_value_size": 1672, "num_data_blocks": 536, "num_entries": 2627, "num_filter_entries": 2627, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.312963) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 4461660 bytes
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.314219) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 308.8 rd, 225.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 5.3 +0.0 blob) out(4.3 +0.0 blob), read-write-amplify(20.8) write-amplify(8.8) OK, records in: 3106, records dropped: 479 output_compression: NoCompression
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.314235) EVENT_LOG_v1 {"time_micros": 1764114045314227, "job": 6, "event": "compaction_finished", "compaction_time_micros": 19750, "compaction_time_cpu_micros": 10477, "output_level": 6, "num_output_files": 1, "total_output_size": 4461660, "num_input_records": 3106, "num_output_records": 2627, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045314383, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114045315111, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.293021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.315159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.315164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.315165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.315167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:40:45 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:40:45.315188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:40:45 compute-0 sudo[145469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evtorppzxqmhwwnnoywsidbxotzonwea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114045.4734457-321-65334201661013/AnsiballZ_command.py'
Nov 25 23:40:45 compute-0 sudo[145469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:46 compute-0 python3.9[145471]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:40:46 compute-0 ovs-vsctl[145472]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 25 23:40:46 compute-0 sudo[145469]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:46 compute-0 sudo[145622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgxkhsnsvvnnsvmmpbxvnwwcvphecixo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114046.392698-330-204707313691061/AnsiballZ_command.py'
Nov 25 23:40:46 compute-0 sudo[145622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:46 compute-0 python3.9[145624]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:40:47 compute-0 sudo[145622]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:47 compute-0 ceph-mon[75654]: pgmap v319: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:47 compute-0 sudo[145777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxhzsccrmhsvkvkdavkuwnukeqigvjuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114047.2289205-338-272750689888377/AnsiballZ_command.py'
Nov 25 23:40:47 compute-0 sudo[145777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:47 compute-0 python3.9[145779]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:40:47 compute-0 ovs-vsctl[145780]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 25 23:40:47 compute-0 sudo[145777]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:48 compute-0 python3.9[145930]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:40:49 compute-0 sudo[146082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmpmsfcgdnbhmouufqqfkgttvkwqjccp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114048.8170269-355-145939701111759/AnsiballZ_file.py'
Nov 25 23:40:49 compute-0 sudo[146082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:49 compute-0 ceph-mon[75654]: pgmap v320: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:49 compute-0 python3.9[146084]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:40:49 compute-0 sudo[146082]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:50 compute-0 sudo[146234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onqxuylqaawrnugoxxagejcnqojoaupd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114049.6390715-363-29950773171689/AnsiballZ_stat.py'
Nov 25 23:40:50 compute-0 sudo[146234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:50 compute-0 python3.9[146236]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:50 compute-0 sudo[146234]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:50 compute-0 sudo[146312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfiwaqpgyxcmdgvoxjmyhmdylyhouvpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114049.6390715-363-29950773171689/AnsiballZ_file.py'
Nov 25 23:40:50 compute-0 sudo[146312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:50 compute-0 python3.9[146314]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:40:50 compute-0 sudo[146312]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:51 compute-0 ceph-mon[75654]: pgmap v321: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:51 compute-0 sudo[146464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgkglhfspbzwiuvgwkkwswekwjdsvbwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114050.958119-363-99247618629160/AnsiballZ_stat.py'
Nov 25 23:40:51 compute-0 sudo[146464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:51 compute-0 python3.9[146466]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:51 compute-0 sudo[146464]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:51 compute-0 sudo[146542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qckozovzauvejqrfbtvczpddyilwwkfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114050.958119-363-99247618629160/AnsiballZ_file.py'
Nov 25 23:40:51 compute-0 sudo[146542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:52 compute-0 python3.9[146544]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:40:52 compute-0 sudo[146542]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:52 compute-0 sudo[146694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzrxgtxaucqmknokslbbawtqwajksqal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114052.3060453-386-4467613752098/AnsiballZ_file.py'
Nov 25 23:40:52 compute-0 sudo[146694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:52 compute-0 python3.9[146696]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:52 compute-0 sudo[146694]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:53 compute-0 ceph-mon[75654]: pgmap v322: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:53 compute-0 sudo[146846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvwgxtjfdvsctafcbantdezyqqcjieaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114053.2083106-394-105074643739750/AnsiballZ_stat.py'
Nov 25 23:40:53 compute-0 sudo[146846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:53 compute-0 python3.9[146848]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:53 compute-0 sudo[146846]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:54 compute-0 sudo[146924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vburszyoyucmagxgphhbrgpgxsfsiccy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114053.2083106-394-105074643739750/AnsiballZ_file.py'
Nov 25 23:40:54 compute-0 sudo[146924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:54 compute-0 python3.9[146926]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:54 compute-0 sudo[146924]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:55 compute-0 sudo[147076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qksueqbjixulwaglnbzvfkfqdeqbyzun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114054.6493907-406-213977402401881/AnsiballZ_stat.py'
Nov 25 23:40:55 compute-0 sudo[147076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:55 compute-0 python3.9[147078]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:55 compute-0 sudo[147076]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:40:55 compute-0 ceph-mon[75654]: pgmap v323: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:55 compute-0 sudo[147154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpvbqysjaqkmgoytmaimzivvyivltyxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114054.6493907-406-213977402401881/AnsiballZ_file.py'
Nov 25 23:40:55 compute-0 sudo[147154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:55 compute-0 python3.9[147156]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:55 compute-0 sudo[147154]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:40:56
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['vms', '.mgr', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'backups']
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:40:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:56 compute-0 sudo[147306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcxhhxmzmqevbjfjftvzbbegidsuhuwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114055.972339-418-228683009632868/AnsiballZ_systemd.py'
Nov 25 23:40:56 compute-0 sudo[147306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:56 compute-0 python3.9[147308]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:40:56 compute-0 systemd[1]: Reloading.
Nov 25 23:40:56 compute-0 systemd-sysv-generator[147336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:40:56 compute-0 systemd-rc-local-generator[147333]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:40:57 compute-0 sudo[147306]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:57 compute-0 ceph-mon[75654]: pgmap v324: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:57 compute-0 sudo[147495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzfcluucvgaaaqlyhswhktftiurctocn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114057.3105772-426-207663152823043/AnsiballZ_stat.py'
Nov 25 23:40:57 compute-0 sudo[147495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:57 compute-0 python3.9[147497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:57 compute-0 sudo[147495]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:58 compute-0 sudo[147573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeiqkqynyqdynuhbaxiqzcuowhlpomia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114057.3105772-426-207663152823043/AnsiballZ_file.py'
Nov 25 23:40:58 compute-0 sudo[147573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:58 compute-0 python3.9[147575]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:58 compute-0 sudo[147573]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:59 compute-0 sudo[147725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlcmbhopwqkhczjhsoydfthqfbdzyajr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114058.701175-438-150640458426986/AnsiballZ_stat.py'
Nov 25 23:40:59 compute-0 sudo[147725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:59 compute-0 python3.9[147727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:40:59 compute-0 sudo[147725]: pam_unix(sudo:session): session closed for user root
Nov 25 23:40:59 compute-0 ceph-mon[75654]: pgmap v325: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:40:59 compute-0 sudo[147803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrgawqxnlofbnabrjjvgzevdneicqnwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114058.701175-438-150640458426986/AnsiballZ_file.py'
Nov 25 23:40:59 compute-0 sudo[147803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:40:59 compute-0 python3.9[147805]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:40:59 compute-0 sudo[147803]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:00 compute-0 sudo[147955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vokrlicepdjsbwmylonryvljwybitgro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114060.1061313-450-28059273080463/AnsiballZ_systemd.py'
Nov 25 23:41:00 compute-0 sudo[147955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:00 compute-0 python3.9[147957]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:41:00 compute-0 systemd[1]: Reloading.
Nov 25 23:41:01 compute-0 systemd-rc-local-generator[147984]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:41:01 compute-0 systemd-sysv-generator[147988]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:41:01 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 23:41:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 23:41:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 23:41:01 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 23:41:01 compute-0 sudo[147955]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:01 compute-0 ceph-mon[75654]: pgmap v326: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:41:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:41:02 compute-0 sudo[148148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrliuazcftsqaftoedcstacjdbzcyzit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114061.6828387-460-185875375215054/AnsiballZ_file.py'
Nov 25 23:41:02 compute-0 sudo[148148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:02 compute-0 python3.9[148150]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:02 compute-0 sudo[148148]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:02 compute-0 sudo[148300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmhgafqpiylqwgcoqfbpsvvujhezjvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114062.4464407-468-242865811011017/AnsiballZ_stat.py'
Nov 25 23:41:02 compute-0 sudo[148300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:03 compute-0 python3.9[148302]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:41:03 compute-0 sudo[148300]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:03 compute-0 ceph-mon[75654]: pgmap v327: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:03 compute-0 sudo[148423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkdiweahgcmdrsdshryyashalcnjlzaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114062.4464407-468-242865811011017/AnsiballZ_copy.py'
Nov 25 23:41:03 compute-0 sudo[148423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:03 compute-0 python3.9[148425]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114062.4464407-468-242865811011017/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:03 compute-0 sudo[148423]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:04 compute-0 sudo[148575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwazrroyyyyuymvrbvgvqztesbuwduqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114064.2508492-485-165911397041971/AnsiballZ_file.py'
Nov 25 23:41:04 compute-0 sudo[148575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:04 compute-0 python3.9[148577]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:04 compute-0 sudo[148575]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:05 compute-0 ceph-mon[75654]: pgmap v328: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:05 compute-0 sudo[148727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmulsoyafwakkoqhnpsaqintyvemnyis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114065.1357996-493-123878201658838/AnsiballZ_stat.py'
Nov 25 23:41:05 compute-0 sudo[148727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:05 compute-0 python3.9[148729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:41:05 compute-0 sudo[148727]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:06 compute-0 sudo[148850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aakrwvwziqrwindfjylvuypqtegywuyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114065.1357996-493-123878201658838/AnsiballZ_copy.py'
Nov 25 23:41:06 compute-0 sudo[148850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:06 compute-0 python3.9[148852]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114065.1357996-493-123878201658838/.source.json _original_basename=.4hjznkpx follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:41:06 compute-0 sudo[148850]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:07 compute-0 sudo[149002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwepzdubxamqysiqskfaiimlqbspkuld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114066.6588893-508-116006683365855/AnsiballZ_file.py'
Nov 25 23:41:07 compute-0 sudo[149002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:07 compute-0 python3.9[149004]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:41:07 compute-0 sudo[149002]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:07 compute-0 ceph-mon[75654]: pgmap v329: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:07 compute-0 sudo[149154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muvioemvgdzqphztsbotshyoeysnntbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114067.5303848-516-269269197352072/AnsiballZ_stat.py'
Nov 25 23:41:07 compute-0 sudo[149154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:08 compute-0 sudo[149154]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:08 compute-0 sudo[149277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrsvrzxkpbqoubcybhbqejoykeolvram ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114067.5303848-516-269269197352072/AnsiballZ_copy.py'
Nov 25 23:41:08 compute-0 sudo[149277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:08 compute-0 sudo[149277]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:09 compute-0 ceph-mon[75654]: pgmap v330: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:09 compute-0 sudo[149429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agqgnqmtzuhhtpefgnlfzghjojxjziwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114069.1230671-533-187956668385793/AnsiballZ_container_config_data.py'
Nov 25 23:41:09 compute-0 sudo[149429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:09 compute-0 python3.9[149431]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 25 23:41:09 compute-0 sudo[149429]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:41:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1768 writes, 7654 keys, 1768 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                           Cumulative WAL: 1768 writes, 1768 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1768 writes, 7654 keys, 1768 commit groups, 1.0 writes per commit group, ingest: 7.98 MB, 0.01 MB/s
                                           Interval WAL: 1768 writes, 1768 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    125.6      0.05              0.02         3    0.016       0      0       0.0       0.0
                                             L6      1/0    4.25 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    180.5    154.6      0.06              0.04         2    0.031    6053    769       0.0       0.0
                                            Sum      1/0    4.25 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    103.0    142.1      0.11              0.06         5    0.022    6053    769       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    105.7    145.4      0.11              0.06         4    0.026    6053    769       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    180.5    154.6      0.06              0.04         2    0.031    6053    769       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    132.4      0.04              0.02         2    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.006, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.1 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f0edcc31f0#2 capacity: 308.00 MB usage: 574.48 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(38,504.27 KB,0.159885%) FilterBlock(6,24.30 KB,0.00770371%) IndexBlock(6,45.92 KB,0.0145603%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 23:41:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:10 compute-0 sudo[149581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfgnpttxpagskiyxarlmjpzfwtujypao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114070.1442215-542-205207998698341/AnsiballZ_container_config_hash.py'
Nov 25 23:41:10 compute-0 sudo[149581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:10 compute-0 python3.9[149583]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 23:41:10 compute-0 sudo[149581]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:11 compute-0 ceph-mon[75654]: pgmap v331: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:11 compute-0 sudo[149733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tushvycwpeehkbmkuqaicntmvhxgrfeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114071.1869495-551-197021211403055/AnsiballZ_podman_container_info.py'
Nov 25 23:41:11 compute-0 sudo[149733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:12 compute-0 python3.9[149735]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 23:41:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:12 compute-0 sudo[149733]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:13 compute-0 sudo[149911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttoadtepecsetcdrajzsgmijgcfcthpt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764114072.9406316-564-193603947923932/AnsiballZ_edpm_container_manage.py'
Nov 25 23:41:13 compute-0 sudo[149911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:13 compute-0 ceph-mon[75654]: pgmap v332: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:13 compute-0 python3[149913]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 23:41:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:15 compute-0 ceph-mon[75654]: pgmap v333: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:17 compute-0 ceph-mon[75654]: pgmap v334: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:18 compute-0 podman[149926]: 2025-11-25 23:41:18.760340429 +0000 UTC m=+4.900654331 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 23:41:18 compute-0 podman[150044]: 2025-11-25 23:41:18.941006169 +0000 UTC m=+0.055269042 container create 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 23:41:18 compute-0 podman[150044]: 2025-11-25 23:41:18.909455839 +0000 UTC m=+0.023718802 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 23:41:18 compute-0 python3[149913]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 23:41:19 compute-0 sudo[149911]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:19 compute-0 ceph-mon[75654]: pgmap v335: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:19 compute-0 sudo[150232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dibwrjafregqobpcuejubyyztnikybpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114079.3872256-572-192710231626506/AnsiballZ_stat.py'
Nov 25 23:41:19 compute-0 sudo[150232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:20 compute-0 python3.9[150234]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:41:20 compute-0 sudo[150232]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v336: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:21 compute-0 ceph-mon[75654]: pgmap v336: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:21 compute-0 sudo[150387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxebjjltpgebvqymtvyobsixcsvolxvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114080.4180012-581-151107834243632/AnsiballZ_file.py'
Nov 25 23:41:21 compute-0 sudo[150387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:21 compute-0 python3.9[150389]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:41:21 compute-0 sudo[150387]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:21 compute-0 sudo[150463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chuxjevcjtcoiinnkpbrgpxwtemmmlts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114080.4180012-581-151107834243632/AnsiballZ_stat.py'
Nov 25 23:41:21 compute-0 sudo[150463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:21 compute-0 python3.9[150465]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:41:21 compute-0 sudo[150463]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:22 compute-0 sudo[150615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgjksrlckjvgeekfuzotmtkhtqhopyph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114082.0639093-581-178525452514858/AnsiballZ_copy.py'
Nov 25 23:41:22 compute-0 sudo[150615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:22 compute-0 python3.9[150617]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764114082.0639093-581-178525452514858/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:41:22 compute-0 sudo[150615]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:23 compute-0 sudo[150691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihstqdikkjupshogrthsxwdiawojbigv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114082.0639093-581-178525452514858/AnsiballZ_systemd.py'
Nov 25 23:41:23 compute-0 sudo[150691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:23 compute-0 ceph-mon[75654]: pgmap v337: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:23 compute-0 python3.9[150693]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 23:41:23 compute-0 systemd[1]: Reloading.
Nov 25 23:41:23 compute-0 systemd-rc-local-generator[150721]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:41:23 compute-0 systemd-sysv-generator[150725]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:41:23 compute-0 sudo[150691]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:24 compute-0 sudo[150802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meiakvhzfyyglvuiqvhndoqcdfpmtwvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114082.0639093-581-178525452514858/AnsiballZ_systemd.py'
Nov 25 23:41:24 compute-0 sudo[150802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:24 compute-0 python3.9[150804]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:41:24 compute-0 systemd[1]: Reloading.
Nov 25 23:41:24 compute-0 systemd-rc-local-generator[150834]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:41:24 compute-0 systemd-sysv-generator[150837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:41:24 compute-0 systemd[1]: Starting ovn_controller container...
Nov 25 23:41:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/627ed15ed15167141774381955816e4567fc2f6e5c3b0e5c37325dd8a7c71b23/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:25 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58.
Nov 25 23:41:25 compute-0 podman[150845]: 2025-11-25 23:41:25.199706784 +0000 UTC m=+0.184227186 container init 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 23:41:25 compute-0 ovn_controller[150860]: + sudo -E kolla_set_configs
Nov 25 23:41:25 compute-0 podman[150845]: 2025-11-25 23:41:25.235364854 +0000 UTC m=+0.219885206 container start 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=ovn_controller)
Nov 25 23:41:25 compute-0 edpm-start-podman-container[150845]: ovn_controller
Nov 25 23:41:25 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 25 23:41:25 compute-0 ceph-mon[75654]: pgmap v338: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:25 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 25 23:41:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:25 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 25 23:41:25 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 25 23:41:25 compute-0 systemd[150898]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 25 23:41:25 compute-0 edpm-start-podman-container[150844]: Creating additional drop-in dependency for "ovn_controller" (668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58)
Nov 25 23:41:25 compute-0 podman[150867]: 2025-11-25 23:41:25.360123385 +0000 UTC m=+0.107895203 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 23:41:25 compute-0 systemd[1]: 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58-55876ed103f2e330.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 23:41:25 compute-0 systemd[1]: 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58-55876ed103f2e330.service: Failed with result 'exit-code'.
Nov 25 23:41:25 compute-0 systemd[1]: Reloading.
Nov 25 23:41:25 compute-0 systemd-sysv-generator[150944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:41:25 compute-0 systemd-rc-local-generator[150941]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:41:25 compute-0 systemd[150898]: Queued start job for default target Main User Target.
Nov 25 23:41:25 compute-0 systemd[150898]: Created slice User Application Slice.
Nov 25 23:41:25 compute-0 systemd[150898]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 25 23:41:25 compute-0 systemd[150898]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 23:41:25 compute-0 systemd[150898]: Reached target Paths.
Nov 25 23:41:25 compute-0 systemd[150898]: Reached target Timers.
Nov 25 23:41:25 compute-0 systemd[150898]: Starting D-Bus User Message Bus Socket...
Nov 25 23:41:25 compute-0 systemd[150898]: Starting Create User's Volatile Files and Directories...
Nov 25 23:41:25 compute-0 systemd[150898]: Finished Create User's Volatile Files and Directories.
Nov 25 23:41:25 compute-0 systemd[150898]: Listening on D-Bus User Message Bus Socket.
Nov 25 23:41:25 compute-0 systemd[150898]: Reached target Sockets.
Nov 25 23:41:25 compute-0 systemd[150898]: Reached target Basic System.
Nov 25 23:41:25 compute-0 systemd[150898]: Reached target Main User Target.
Nov 25 23:41:25 compute-0 systemd[150898]: Startup finished in 188ms.
Nov 25 23:41:25 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 25 23:41:25 compute-0 systemd[1]: Started ovn_controller container.
Nov 25 23:41:25 compute-0 systemd[1]: Started Session c1 of User root.
Nov 25 23:41:25 compute-0 sudo[150802]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:25 compute-0 ovn_controller[150860]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 23:41:25 compute-0 ovn_controller[150860]: INFO:__main__:Validating config file
Nov 25 23:41:25 compute-0 ovn_controller[150860]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 23:41:25 compute-0 ovn_controller[150860]: INFO:__main__:Writing out command to execute
Nov 25 23:41:25 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 25 23:41:25 compute-0 ovn_controller[150860]: ++ cat /run_command
Nov 25 23:41:25 compute-0 ovn_controller[150860]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 23:41:25 compute-0 ovn_controller[150860]: + ARGS=
Nov 25 23:41:25 compute-0 ovn_controller[150860]: + sudo kolla_copy_cacerts
Nov 25 23:41:25 compute-0 systemd[1]: Started Session c2 of User root.
Nov 25 23:41:25 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 25 23:41:25 compute-0 ovn_controller[150860]: + [[ ! -n '' ]]
Nov 25 23:41:25 compute-0 ovn_controller[150860]: + . kolla_extend_start
Nov 25 23:41:25 compute-0 ovn_controller[150860]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 23:41:25 compute-0 ovn_controller[150860]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 25 23:41:25 compute-0 ovn_controller[150860]: + umask 0022
Nov 25 23:41:25 compute-0 ovn_controller[150860]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 25 23:41:25 compute-0 NetworkManager[49538]: <info>  [1764114085.8493] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 25 23:41:25 compute-0 NetworkManager[49538]: <info>  [1764114085.8499] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 23:41:25 compute-0 NetworkManager[49538]: <info>  [1764114085.8508] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 25 23:41:25 compute-0 NetworkManager[49538]: <info>  [1764114085.8513] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 25 23:41:25 compute-0 NetworkManager[49538]: <info>  [1764114085.8516] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 23:41:25 compute-0 kernel: br-int: entered promiscuous mode
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 25 23:41:25 compute-0 ovn_controller[150860]: 2025-11-25T23:41:25Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 25 23:41:25 compute-0 systemd-udevd[150991]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 23:41:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:41:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:41:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:41:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:41:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:41:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:41:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v339: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:26 compute-0 ovn_controller[150860]: 2025-11-25T23:41:26Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 23:41:26 compute-0 ovn_controller[150860]: 2025-11-25T23:41:26Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 23:41:26 compute-0 ovn_controller[150860]: 2025-11-25T23:41:26Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 23:41:26 compute-0 ovn_controller[150860]: 2025-11-25T23:41:26Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 23:41:26 compute-0 ovn_controller[150860]: 2025-11-25T23:41:26Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 23:41:26 compute-0 ovn_controller[150860]: 2025-11-25T23:41:26Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 23:41:26 compute-0 NetworkManager[49538]: <info>  [1764114086.2819] manager: (ovn-c439b2-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 25 23:41:26 compute-0 sudo[151118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vazqfhdtqkfpgukznsolbqoexhfndlcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114085.9337535-609-121017184776364/AnsiballZ_command.py'
Nov 25 23:41:26 compute-0 sudo[151118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:26 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 25 23:41:26 compute-0 systemd-udevd[151005]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 23:41:26 compute-0 NetworkManager[49538]: <info>  [1764114086.2984] device (genev_sys_6081): carrier: link connected
Nov 25 23:41:26 compute-0 NetworkManager[49538]: <info>  [1764114086.2987] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 25 23:41:26 compute-0 python3.9[151121]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:41:26 compute-0 ovs-vsctl[151123]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 25 23:41:26 compute-0 sudo[151118]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:27 compute-0 sudo[151273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmeclzzqjtdzbidapawsmuvhqrrxfilx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114086.7164223-617-45703430813344/AnsiballZ_command.py'
Nov 25 23:41:27 compute-0 sudo[151273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:27 compute-0 python3.9[151275]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:41:27 compute-0 ovs-vsctl[151277]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 25 23:41:27 compute-0 sudo[151273]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:27 compute-0 ceph-mon[75654]: pgmap v339: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:28 compute-0 sudo[151435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvhdiifyuckqqccbgjluggdygeenhafq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114087.797533-631-140278479374295/AnsiballZ_command.py'
Nov 25 23:41:28 compute-0 sudo[151435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:28 compute-0 python3.9[151437]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:41:28 compute-0 ovs-vsctl[151438]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 25 23:41:28 compute-0 sudo[151435]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:29 compute-0 sshd-session[139362]: Connection closed by 192.168.122.30 port 60068
Nov 25 23:41:29 compute-0 sshd-session[139359]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:41:29 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Nov 25 23:41:29 compute-0 systemd[1]: session-46.scope: Consumed 1min 5.700s CPU time.
Nov 25 23:41:29 compute-0 systemd-logind[789]: Session 46 logged out. Waiting for processes to exit.
Nov 25 23:41:29 compute-0 systemd-logind[789]: Removed session 46.
Nov 25 23:41:29 compute-0 ceph-mon[75654]: pgmap v340: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v341: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:31 compute-0 ceph-mon[75654]: pgmap v341: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:33 compute-0 ceph-mon[75654]: pgmap v342: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:34 compute-0 sshd-session[151463]: Accepted publickey for zuul from 192.168.122.30 port 43610 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:41:34 compute-0 systemd-logind[789]: New session 48 of user zuul.
Nov 25 23:41:34 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 25 23:41:34 compute-0 sshd-session[151463]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:41:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:35 compute-0 ceph-mon[75654]: pgmap v343: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:35 compute-0 python3.9[151616]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:41:35 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 25 23:41:35 compute-0 systemd[150898]: Activating special unit Exit the Session...
Nov 25 23:41:35 compute-0 systemd[150898]: Stopped target Main User Target.
Nov 25 23:41:35 compute-0 systemd[150898]: Stopped target Basic System.
Nov 25 23:41:35 compute-0 systemd[150898]: Stopped target Paths.
Nov 25 23:41:35 compute-0 systemd[150898]: Stopped target Sockets.
Nov 25 23:41:35 compute-0 systemd[150898]: Stopped target Timers.
Nov 25 23:41:35 compute-0 systemd[150898]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 23:41:35 compute-0 systemd[150898]: Closed D-Bus User Message Bus Socket.
Nov 25 23:41:35 compute-0 systemd[150898]: Stopped Create User's Volatile Files and Directories.
Nov 25 23:41:35 compute-0 systemd[150898]: Removed slice User Application Slice.
Nov 25 23:41:35 compute-0 systemd[150898]: Reached target Shutdown.
Nov 25 23:41:35 compute-0 systemd[150898]: Finished Exit the Session.
Nov 25 23:41:35 compute-0 systemd[150898]: Reached target Exit the Session.
Nov 25 23:41:35 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 25 23:41:35 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 25 23:41:35 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 25 23:41:35 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 25 23:41:35 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 25 23:41:35 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 25 23:41:35 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 25 23:41:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:36 compute-0 sudo[151774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhkvokuhqiuiuauvizvbwcrsspcnathn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114096.263625-34-231641243181914/AnsiballZ_file.py'
Nov 25 23:41:36 compute-0 sudo[151774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:36 compute-0 python3.9[151776]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:37 compute-0 sudo[151774]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:37 compute-0 sudo[151926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dowyohtfuzybwzatyixfimhwtrjppyru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114097.1931431-34-16818318121088/AnsiballZ_file.py'
Nov 25 23:41:37 compute-0 sudo[151926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:37 compute-0 ceph-mon[75654]: pgmap v344: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:37 compute-0 python3.9[151928]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:37 compute-0 sudo[151926]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:38 compute-0 sudo[152078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daxwxqnicydxsbrfssrwywgctjabhsfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114097.9290493-34-165973753884502/AnsiballZ_file.py'
Nov 25 23:41:38 compute-0 sudo[152078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:38 compute-0 python3.9[152080]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:38 compute-0 sudo[152078]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:39 compute-0 sudo[152232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxvronjapjgemoepfqtdptsjbzhcuinh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114098.7720823-34-175215284253440/AnsiballZ_file.py'
Nov 25 23:41:39 compute-0 sudo[152232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:39 compute-0 python3.9[152234]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:39 compute-0 sudo[152232]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:39 compute-0 ceph-mon[75654]: pgmap v345: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:40 compute-0 sudo[152384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfznydaysvnigjlehstqtueqtfhyphyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114099.6276393-34-262586339596311/AnsiballZ_file.py'
Nov 25 23:41:40 compute-0 sudo[152384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:40 compute-0 sshd-session[152158]: Invalid user python from 202.83.162.167 port 40728
Nov 25 23:41:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:40 compute-0 python3.9[152386]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:40 compute-0 sudo[152384]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:40 compute-0 sshd-session[152158]: Received disconnect from 202.83.162.167 port 40728:11: Bye Bye [preauth]
Nov 25 23:41:40 compute-0 sshd-session[152158]: Disconnected from invalid user python 202.83.162.167 port 40728 [preauth]
Nov 25 23:41:41 compute-0 python3.9[152536]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:41:41 compute-0 ceph-mon[75654]: pgmap v346: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:41 compute-0 sudo[152686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdpqgwasvrafqwnzkserhuhucucderat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114101.3110065-78-42241325712133/AnsiballZ_seboolean.py'
Nov 25 23:41:41 compute-0 sudo[152686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:42 compute-0 python3.9[152688]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 23:41:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:42 compute-0 sudo[152686]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:43 compute-0 ceph-mon[75654]: pgmap v347: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:43 compute-0 python3.9[152838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:41:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:44 compute-0 python3.9[152960]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114102.9222944-86-82250326519734/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:44 compute-0 sudo[152961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:41:44 compute-0 sudo[152961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:44 compute-0 sudo[152961]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:44 compute-0 sudo[152987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:41:44 compute-0 sudo[152987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:44 compute-0 sudo[152987]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:44 compute-0 sudo[153035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:41:44 compute-0 sudo[153035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:44 compute-0 sudo[153035]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:44 compute-0 sudo[153083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:41:44 compute-0 sudo[153083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:45 compute-0 python3.9[153228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:41:45 compute-0 sudo[153083]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:41:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:41:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:41:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:41:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:41:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:41:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 8a7a95d5-eb27-4e95-9106-7eb8b662ae20 does not exist
Nov 25 23:41:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev cce6d471-92ff-4ceb-b40a-7dbbb2a362ed does not exist
Nov 25 23:41:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 34eb893b-d82d-46a7-9eb9-2dad6073013b does not exist
Nov 25 23:41:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:41:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:41:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:41:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:41:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:41:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:41:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:45 compute-0 sudo[153251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:41:45 compute-0 sudo[153251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:45 compute-0 sudo[153251]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:45 compute-0 sudo[153303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:41:45 compute-0 sudo[153303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:45 compute-0 sudo[153303]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:45 compute-0 sudo[153357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:41:45 compute-0 sudo[153357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:45 compute-0 sudo[153357]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:45 compute-0 sudo[153406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:41:45 compute-0 sudo[153406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:45 compute-0 ceph-mon[75654]: pgmap v348: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:41:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:41:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:41:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:41:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:41:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:41:45 compute-0 python3.9[153463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114104.6651092-101-93927688971522/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:46 compute-0 podman[153516]: 2025-11-25 23:41:46.046338604 +0000 UTC m=+0.049496819 container create b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 23:41:46 compute-0 systemd[1]: Started libpod-conmon-b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff.scope.
Nov 25 23:41:46 compute-0 podman[153516]: 2025-11-25 23:41:46.025490839 +0000 UTC m=+0.028649054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:41:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:41:46 compute-0 podman[153516]: 2025-11-25 23:41:46.144563669 +0000 UTC m=+0.147721884 container init b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:41:46 compute-0 podman[153516]: 2025-11-25 23:41:46.159547668 +0000 UTC m=+0.162705923 container start b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:41:46 compute-0 podman[153516]: 2025-11-25 23:41:46.163799581 +0000 UTC m=+0.166957806 container attach b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:41:46 compute-0 compassionate_euler[153543]: 167 167
Nov 25 23:41:46 compute-0 systemd[1]: libpod-b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff.scope: Deactivated successfully.
Nov 25 23:41:46 compute-0 podman[153516]: 2025-11-25 23:41:46.167978432 +0000 UTC m=+0.171136667 container died b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-542aa3fd2f2f76c5cda50bc0f50f7eb12aa584fba27f1b2bf1e0cf705f250614-merged.mount: Deactivated successfully.
Nov 25 23:41:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v349: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:46 compute-0 podman[153516]: 2025-11-25 23:41:46.222528635 +0000 UTC m=+0.225686870 container remove b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euler, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:41:46 compute-0 systemd[1]: libpod-conmon-b8c3e13388c38e345edebaac6d163b085592a2dfbd484cc490dede6d6008e4ff.scope: Deactivated successfully.
Nov 25 23:41:46 compute-0 podman[153642]: 2025-11-25 23:41:46.44172881 +0000 UTC m=+0.052615521 container create f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:41:46 compute-0 systemd[1]: Started libpod-conmon-f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b.scope.
Nov 25 23:41:46 compute-0 podman[153642]: 2025-11-25 23:41:46.413013286 +0000 UTC m=+0.023899917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:41:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:46 compute-0 podman[153642]: 2025-11-25 23:41:46.554446141 +0000 UTC m=+0.165332772 container init f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 23:41:46 compute-0 podman[153642]: 2025-11-25 23:41:46.569964734 +0000 UTC m=+0.180851295 container start f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 23:41:46 compute-0 podman[153642]: 2025-11-25 23:41:46.574162056 +0000 UTC m=+0.185048707 container attach f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 25 23:41:46 compute-0 sudo[153713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzuetpdfkdnblxyexwhtbvwthmanhquk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114106.2054152-118-69606392078933/AnsiballZ_setup.py'
Nov 25 23:41:46 compute-0 sudo[153713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:46 compute-0 python3.9[153715]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:41:47 compute-0 sudo[153713]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:47 compute-0 sudo[153815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymtuyovvorgmfncmgxcniaqvjyizhqyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114106.2054152-118-69606392078933/AnsiballZ_dnf.py'
Nov 25 23:41:47 compute-0 sudo[153815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:47 compute-0 ceph-mon[75654]: pgmap v349: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:47 compute-0 magical_villani[153676]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:41:47 compute-0 magical_villani[153676]: --> relative data size: 1.0
Nov 25 23:41:47 compute-0 magical_villani[153676]: --> All data devices are unavailable
Nov 25 23:41:47 compute-0 podman[153642]: 2025-11-25 23:41:47.817146758 +0000 UTC m=+1.428033319 container died f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 23:41:47 compute-0 systemd[1]: libpod-f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b.scope: Deactivated successfully.
Nov 25 23:41:47 compute-0 systemd[1]: libpod-f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b.scope: Consumed 1.185s CPU time.
Nov 25 23:41:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4fc54f6fa6afcc21cd9c25180986f7cf34da41a0571e4fdaf6e4d7e612ddb52-merged.mount: Deactivated successfully.
Nov 25 23:41:47 compute-0 python3.9[153819]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:41:47 compute-0 podman[153642]: 2025-11-25 23:41:47.90396318 +0000 UTC m=+1.514849741 container remove f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:41:47 compute-0 systemd[1]: libpod-conmon-f6cbd738f7afed9049272fbb280a5cf4ff36657ea18c2962653966e163a2b34b.scope: Deactivated successfully.
Nov 25 23:41:47 compute-0 sudo[153406]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:48 compute-0 sudo[153837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:41:48 compute-0 sudo[153837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:48 compute-0 sudo[153837]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:48 compute-0 sudo[153862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:41:48 compute-0 sudo[153862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:48 compute-0 sudo[153862]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:48 compute-0 sudo[153887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:41:48 compute-0 sudo[153887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:48 compute-0 sudo[153887]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:48 compute-0 sudo[153912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:41:48 compute-0 sudo[153912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:48 compute-0 podman[153978]: 2025-11-25 23:41:48.682124236 +0000 UTC m=+0.066645425 container create 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:41:48 compute-0 systemd[1]: Started libpod-conmon-73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe.scope.
Nov 25 23:41:48 compute-0 podman[153978]: 2025-11-25 23:41:48.652256021 +0000 UTC m=+0.036777270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:41:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:41:48 compute-0 podman[153978]: 2025-11-25 23:41:48.784909093 +0000 UTC m=+0.169430272 container init 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:41:48 compute-0 podman[153978]: 2025-11-25 23:41:48.795652929 +0000 UTC m=+0.180174098 container start 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:41:48 compute-0 podman[153978]: 2025-11-25 23:41:48.799434269 +0000 UTC m=+0.183955448 container attach 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 23:41:48 compute-0 funny_matsumoto[153994]: 167 167
Nov 25 23:41:48 compute-0 systemd[1]: libpod-73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe.scope: Deactivated successfully.
Nov 25 23:41:48 compute-0 podman[153978]: 2025-11-25 23:41:48.807666708 +0000 UTC m=+0.192187907 container died 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 23:41:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b77008bb2652e724603a4d4ae5689974f2d6bdce1e54d30f437633702af31dbe-merged.mount: Deactivated successfully.
Nov 25 23:41:48 compute-0 podman[153978]: 2025-11-25 23:41:48.862625851 +0000 UTC m=+0.247147040 container remove 73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:41:48 compute-0 systemd[1]: libpod-conmon-73ca71321baa85237cd941b09c232e9336728648f55feacdcc2198b14f453dbe.scope: Deactivated successfully.
Nov 25 23:41:49 compute-0 podman[154019]: 2025-11-25 23:41:49.108246831 +0000 UTC m=+0.069784369 container create 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 23:41:49 compute-0 sudo[153815]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:49 compute-0 podman[154019]: 2025-11-25 23:41:49.079503015 +0000 UTC m=+0.041040693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:41:49 compute-0 systemd[1]: Started libpod-conmon-9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e.scope.
Nov 25 23:41:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcae8f743279f1f664315ffab0c3d807a898848aaf2bc3be773f731878c45bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcae8f743279f1f664315ffab0c3d807a898848aaf2bc3be773f731878c45bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcae8f743279f1f664315ffab0c3d807a898848aaf2bc3be773f731878c45bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bcae8f743279f1f664315ffab0c3d807a898848aaf2bc3be773f731878c45bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:49 compute-0 podman[154019]: 2025-11-25 23:41:49.27612098 +0000 UTC m=+0.237658548 container init 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:41:49 compute-0 podman[154019]: 2025-11-25 23:41:49.287812611 +0000 UTC m=+0.249350119 container start 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 23:41:49 compute-0 podman[154019]: 2025-11-25 23:41:49.291267084 +0000 UTC m=+0.252804682 container attach 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:41:49 compute-0 ceph-mon[75654]: pgmap v350: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]: {
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:     "0": [
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:         {
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "devices": [
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "/dev/loop3"
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             ],
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_name": "ceph_lv0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_size": "21470642176",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "name": "ceph_lv0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "tags": {
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.cluster_name": "ceph",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.crush_device_class": "",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.encrypted": "0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.osd_id": "0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.type": "block",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.vdo": "0"
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             },
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "type": "block",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "vg_name": "ceph_vg0"
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:         }
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:     ],
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:     "1": [
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:         {
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "devices": [
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "/dev/loop4"
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             ],
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_name": "ceph_lv1",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_size": "21470642176",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "name": "ceph_lv1",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "tags": {
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.cluster_name": "ceph",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.crush_device_class": "",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.encrypted": "0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.osd_id": "1",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.type": "block",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.vdo": "0"
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             },
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "type": "block",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "vg_name": "ceph_vg1"
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:         }
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:     ],
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:     "2": [
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:         {
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "devices": [
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "/dev/loop5"
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             ],
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_name": "ceph_lv2",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_size": "21470642176",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "name": "ceph_lv2",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "tags": {
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.cluster_name": "ceph",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.crush_device_class": "",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.encrypted": "0",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.osd_id": "2",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.type": "block",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:                 "ceph.vdo": "0"
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             },
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "type": "block",
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:             "vg_name": "ceph_vg2"
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:         }
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]:     ]
Nov 25 23:41:50 compute-0 sweet_bhabha[154042]: }
Nov 25 23:41:50 compute-0 systemd[1]: libpod-9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e.scope: Deactivated successfully.
Nov 25 23:41:50 compute-0 podman[154019]: 2025-11-25 23:41:50.137457871 +0000 UTC m=+1.098995409 container died 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bcae8f743279f1f664315ffab0c3d807a898848aaf2bc3be773f731878c45bf-merged.mount: Deactivated successfully.
Nov 25 23:41:50 compute-0 podman[154019]: 2025-11-25 23:41:50.201029504 +0000 UTC m=+1.162567012 container remove 9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 23:41:50 compute-0 systemd[1]: libpod-conmon-9fad36932d8c01c0209102a615ffcf9f5ecc2fb7c03a52f4a163de56cd0cf37e.scope: Deactivated successfully.
Nov 25 23:41:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:50 compute-0 sudo[153912]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:50 compute-0 sudo[154205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bprjniwaocjjsygynrebvuozibfyjopx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114109.4084678-130-55991748606715/AnsiballZ_systemd.py'
Nov 25 23:41:50 compute-0 sudo[154205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:41:50 compute-0 sudo[154207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:41:50 compute-0 sudo[154207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:50 compute-0 sudo[154207]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:50 compute-0 sudo[154233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:41:50 compute-0 sudo[154233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:50 compute-0 sudo[154233]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:50 compute-0 sudo[154258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:41:50 compute-0 sudo[154258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:50 compute-0 sudo[154258]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:50 compute-0 sudo[154283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:41:50 compute-0 sudo[154283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:50 compute-0 python3.9[154208]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 23:41:50 compute-0 podman[154350]: 2025-11-25 23:41:50.898098292 +0000 UTC m=+0.055951531 container create 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:41:50 compute-0 systemd[1]: Started libpod-conmon-07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3.scope.
Nov 25 23:41:50 compute-0 podman[154350]: 2025-11-25 23:41:50.878473469 +0000 UTC m=+0.036326738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:41:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:41:50 compute-0 podman[154350]: 2025-11-25 23:41:50.998657569 +0000 UTC m=+0.156510908 container init 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:41:51 compute-0 podman[154350]: 2025-11-25 23:41:51.010426802 +0000 UTC m=+0.168280071 container start 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:41:51 compute-0 podman[154350]: 2025-11-25 23:41:51.014767568 +0000 UTC m=+0.172620877 container attach 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 23:41:51 compute-0 boring_chandrasekhar[154366]: 167 167
Nov 25 23:41:51 compute-0 systemd[1]: libpod-07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3.scope: Deactivated successfully.
Nov 25 23:41:51 compute-0 podman[154350]: 2025-11-25 23:41:51.018795535 +0000 UTC m=+0.176648794 container died 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 23:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a46b925513607ef717b033115caca92c974bd6ac7f1c32c1f0ca9eb951ed1c2-merged.mount: Deactivated successfully.
Nov 25 23:41:51 compute-0 podman[154350]: 2025-11-25 23:41:51.07642275 +0000 UTC m=+0.234276019 container remove 07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 23:41:51 compute-0 systemd[1]: libpod-conmon-07af0fb88af2b7d7bc99ccd34053a65f660ca757ed85bfc9bf2ac743f63885c3.scope: Deactivated successfully.
Nov 25 23:41:51 compute-0 podman[154390]: 2025-11-25 23:41:51.3487462 +0000 UTC m=+0.070328574 container create 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 23:41:51 compute-0 systemd[1]: Started libpod-conmon-364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4.scope.
Nov 25 23:41:51 compute-0 podman[154390]: 2025-11-25 23:41:51.32208108 +0000 UTC m=+0.043663454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:41:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea96ebd31a9cf08b17d339236a69d58a518d55a42701d092ef3daacd539c60b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea96ebd31a9cf08b17d339236a69d58a518d55a42701d092ef3daacd539c60b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea96ebd31a9cf08b17d339236a69d58a518d55a42701d092ef3daacd539c60b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea96ebd31a9cf08b17d339236a69d58a518d55a42701d092ef3daacd539c60b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:41:51 compute-0 podman[154390]: 2025-11-25 23:41:51.465100067 +0000 UTC m=+0.186682481 container init 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 25 23:41:51 compute-0 podman[154390]: 2025-11-25 23:41:51.476605763 +0000 UTC m=+0.198188137 container start 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:41:51 compute-0 podman[154390]: 2025-11-25 23:41:51.480731543 +0000 UTC m=+0.202313907 container attach 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 23:41:51 compute-0 sudo[154205]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:51 compute-0 ceph-mon[75654]: pgmap v351: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:52 compute-0 python3.9[154572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]: {
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "osd_id": 2,
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "type": "bluestore"
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:     },
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "osd_id": 1,
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "type": "bluestore"
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:     },
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "osd_id": 0,
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:         "type": "bluestore"
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]:     }
Nov 25 23:41:52 compute-0 intelligent_pascal[154406]: }
Nov 25 23:41:52 compute-0 systemd[1]: libpod-364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4.scope: Deactivated successfully.
Nov 25 23:41:52 compute-0 systemd[1]: libpod-364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4.scope: Consumed 1.171s CPU time.
Nov 25 23:41:52 compute-0 podman[154390]: 2025-11-25 23:41:52.643367376 +0000 UTC m=+1.364949840 container died 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ea96ebd31a9cf08b17d339236a69d58a518d55a42701d092ef3daacd539c60b-merged.mount: Deactivated successfully.
Nov 25 23:41:52 compute-0 podman[154390]: 2025-11-25 23:41:52.718227689 +0000 UTC m=+1.439810033 container remove 364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:41:52 compute-0 systemd[1]: libpod-conmon-364f7702b1ecab9a3f1a03ed3c27795d4b954e85e5bd4cb2d67f4f3e73d4c9c4.scope: Deactivated successfully.
Nov 25 23:41:52 compute-0 sudo[154283]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:41:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:41:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:41:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:41:52 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev f8368813-b703-4d44-856d-af49e7ce8254 does not exist
Nov 25 23:41:52 compute-0 sudo[154666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:41:52 compute-0 sudo[154666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:52 compute-0 sudo[154666]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:52 compute-0 sudo[154696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:41:52 compute-0 sudo[154696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:41:52 compute-0 sudo[154696]: pam_unix(sudo:session): session closed for user root
Nov 25 23:41:53 compute-0 python3.9[154771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114111.9067004-138-510763543191/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:53 compute-0 ceph-mon[75654]: pgmap v352: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:41:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:41:54 compute-0 python3.9[154921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:41:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:54 compute-0 python3.9[155042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114113.45278-138-204972015910014/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:41:55 compute-0 ceph-mon[75654]: pgmap v353: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:55 compute-0 ovn_controller[150860]: 2025-11-25T23:41:55Z|00025|memory|INFO|17280 kB peak resident set size after 30.1 seconds
Nov 25 23:41:55 compute-0 ovn_controller[150860]: 2025-11-25T23:41:55Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 25 23:41:55 compute-0 podman[155166]: 2025-11-25 23:41:55.983567902 +0000 UTC m=+0.145209446 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:41:56
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', 'images']
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:41:56 compute-0 python3.9[155204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:41:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:56 compute-0 python3.9[155339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114115.4942293-182-274171676532637/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:56 compute-0 ceph-mon[75654]: pgmap v354: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:57 compute-0 python3.9[155489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:41:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:58 compute-0 python3.9[155610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114116.948464-182-201418477778343/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:41:59 compute-0 python3.9[155760]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:41:59 compute-0 ceph-mon[75654]: pgmap v355: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:41:59 compute-0 sudo[155914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wakhewfqjqoxpxwgfikbicorllbtcuni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114119.372824-220-34981346968500/AnsiballZ_file.py'
Nov 25 23:41:59 compute-0 sudo[155914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:00 compute-0 python3.9[155916]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:42:00 compute-0 sudo[155914]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:00 compute-0 sudo[156066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzpqfqlajlgzexmssbuaxhnhnjzexhsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114120.3136263-228-93770647565213/AnsiballZ_stat.py'
Nov 25 23:42:00 compute-0 sudo[156066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:00 compute-0 sshd-session[155810]: Invalid user user10 from 171.244.140.22 port 53408
Nov 25 23:42:00 compute-0 python3.9[156068]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:42:00 compute-0 sudo[156066]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:00 compute-0 sshd-session[155810]: Received disconnect from 171.244.140.22 port 53408:11: Bye Bye [preauth]
Nov 25 23:42:00 compute-0 sshd-session[155810]: Disconnected from invalid user user10 171.244.140.22 port 53408 [preauth]
Nov 25 23:42:01 compute-0 sudo[156144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufiwkftzijynfdehimfkbiuwkrdlfjml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114120.3136263-228-93770647565213/AnsiballZ_file.py'
Nov 25 23:42:01 compute-0 sudo[156144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:01 compute-0 ceph-mon[75654]: pgmap v356: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:01 compute-0 python3.9[156146]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:42:01 compute-0 sudo[156144]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:42:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:42:01 compute-0 sudo[156296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qallgihyspfnzkzrlnzhijbjgfstffwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114121.4760742-228-232578989140409/AnsiballZ_stat.py'
Nov 25 23:42:01 compute-0 sudo[156296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:01 compute-0 python3.9[156298]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:42:02 compute-0 sudo[156296]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:02 compute-0 sudo[156374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdbqnjtgegkoicfktzielavbymeqqfre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114121.4760742-228-232578989140409/AnsiballZ_file.py'
Nov 25 23:42:02 compute-0 sudo[156374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:02 compute-0 python3.9[156376]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:42:02 compute-0 sudo[156374]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:03 compute-0 sudo[156526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cchtvwignxjvqzdavbjmdtzijyuhgkaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114122.7341368-251-48900824536718/AnsiballZ_file.py'
Nov 25 23:42:03 compute-0 sudo[156526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:03 compute-0 python3.9[156528]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:42:03 compute-0 ceph-mon[75654]: pgmap v357: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:03 compute-0 sudo[156526]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:03 compute-0 sudo[156678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zktiiuggmbfeoteviczqhtdnlqhqvvjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114123.5583718-259-94918868929125/AnsiballZ_stat.py'
Nov 25 23:42:03 compute-0 sudo[156678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:04 compute-0 python3.9[156680]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:42:04 compute-0 sudo[156678]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:04 compute-0 sudo[156756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znnkvpaotlnqjbyvuajhryfvexrsrprr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114123.5583718-259-94918868929125/AnsiballZ_file.py'
Nov 25 23:42:04 compute-0 sudo[156756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:04 compute-0 python3.9[156758]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:42:04 compute-0 sudo[156756]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:05 compute-0 sudo[156908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbiipgznvnmfolrcthiydiuhsctddrzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114124.953084-271-119703927341038/AnsiballZ_stat.py'
Nov 25 23:42:05 compute-0 sudo[156908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:05 compute-0 ceph-mon[75654]: pgmap v358: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:05 compute-0 python3.9[156910]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:42:05 compute-0 sudo[156908]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:05 compute-0 sudo[156986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfawrksevnnmltayyhzswlcmlttlycze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114124.953084-271-119703927341038/AnsiballZ_file.py'
Nov 25 23:42:05 compute-0 sudo[156986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:05 compute-0 python3.9[156988]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:42:05 compute-0 sudo[156986]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:06 compute-0 sudo[157138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdygeljudbrajkshfiwdgezaryawjkdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114126.1573427-283-42735662016800/AnsiballZ_systemd.py'
Nov 25 23:42:06 compute-0 sudo[157138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:06 compute-0 python3.9[157140]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:42:06 compute-0 systemd[1]: Reloading.
Nov 25 23:42:07 compute-0 systemd-rc-local-generator[157168]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:42:07 compute-0 systemd-sysv-generator[157172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:42:07 compute-0 sudo[157138]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:07 compute-0 ceph-mon[75654]: pgmap v359: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:07 compute-0 sudo[157327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksokmnfkubqxtxneyiotxfbmjgoawfef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114127.492233-291-128777205521559/AnsiballZ_stat.py'
Nov 25 23:42:07 compute-0 sudo[157327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:08 compute-0 python3.9[157329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:42:08 compute-0 sudo[157327]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:08 compute-0 sudo[157405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdsyyfqehhewuwulswhwbnlgwshspdwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114127.492233-291-128777205521559/AnsiballZ_file.py'
Nov 25 23:42:08 compute-0 sudo[157405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:08 compute-0 python3.9[157407]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:42:08 compute-0 sudo[157405]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:09 compute-0 sudo[157557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygeevymnvlabqnqwypuztlmvryepxvmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114128.8243911-303-102240590191244/AnsiballZ_stat.py'
Nov 25 23:42:09 compute-0 sudo[157557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:09 compute-0 ceph-mon[75654]: pgmap v360: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:09 compute-0 python3.9[157559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:42:09 compute-0 sudo[157557]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:09 compute-0 sudo[157635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxywforfivcyxusjtahuikxmrgojhhbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114128.8243911-303-102240590191244/AnsiballZ_file.py'
Nov 25 23:42:09 compute-0 sudo[157635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:09 compute-0 python3.9[157637]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:42:09 compute-0 sudo[157635]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:10 compute-0 sudo[157789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgttzfvvbyndahasfpqokyatzsfnnsev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114130.1100948-315-199368552509166/AnsiballZ_systemd.py'
Nov 25 23:42:10 compute-0 sudo[157789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:10 compute-0 python3.9[157791]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:42:10 compute-0 systemd[1]: Reloading.
Nov 25 23:42:10 compute-0 systemd-rc-local-generator[157819]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:42:10 compute-0 systemd-sysv-generator[157824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:42:11 compute-0 sshd-session[157638]: Received disconnect from 124.18.141.70 port 33564:11: Bye Bye [preauth]
Nov 25 23:42:11 compute-0 sshd-session[157638]: Disconnected from authenticating user root 124.18.141.70 port 33564 [preauth]
Nov 25 23:42:11 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 23:42:11 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 23:42:11 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 23:42:11 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 23:42:11 compute-0 sudo[157789]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:11 compute-0 ceph-mon[75654]: pgmap v361: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:12 compute-0 sudo[157983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyslbzadmmfvlhpgnbqkpqjpoqzclarj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114131.589301-325-32554139359057/AnsiballZ_file.py'
Nov 25 23:42:12 compute-0 sudo[157983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:12 compute-0 python3.9[157985]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:42:12 compute-0 sudo[157983]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:12 compute-0 sudo[158135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qchdjzkevppdzwssdqbfuiuozrojjvwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114132.5297968-333-6380289767988/AnsiballZ_stat.py'
Nov 25 23:42:12 compute-0 sudo[158135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:13 compute-0 python3.9[158137]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:42:13 compute-0 sudo[158135]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:13 compute-0 ceph-mon[75654]: pgmap v362: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:13 compute-0 sudo[158258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drnblljaqltzmaonkyjjiijmkhgtsvdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114132.5297968-333-6380289767988/AnsiballZ_copy.py'
Nov 25 23:42:13 compute-0 sudo[158258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:13 compute-0 python3.9[158260]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114132.5297968-333-6380289767988/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:42:13 compute-0 sudo[158258]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:14 compute-0 sudo[158410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zioaftnvrrysskepdmldcndhzrevnbal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114134.4285793-350-274130213228164/AnsiballZ_file.py'
Nov 25 23:42:14 compute-0 sudo[158410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:15 compute-0 python3.9[158412]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:42:15 compute-0 sudo[158410]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:15 compute-0 ceph-mon[75654]: pgmap v363: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:15 compute-0 sudo[158562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grkbaruaqhjlpaksfrgbwktqbyahevhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114135.3597665-358-271186454324317/AnsiballZ_stat.py'
Nov 25 23:42:15 compute-0 sudo[158562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:15 compute-0 python3.9[158564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:42:15 compute-0 sudo[158562]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:16 compute-0 sudo[158685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgjrsnqgvabifjqzzdffavkeruinefsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114135.3597665-358-271186454324317/AnsiballZ_copy.py'
Nov 25 23:42:16 compute-0 sudo[158685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:16 compute-0 python3.9[158687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114135.3597665-358-271186454324317/.source.json _original_basename=.og3_1hbb follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:42:16 compute-0 sudo[158685]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:17 compute-0 sudo[158837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhqezbzltsbxbylxugkosfdvfuquqviq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114136.9106133-373-254408298424436/AnsiballZ_file.py'
Nov 25 23:42:17 compute-0 sudo[158837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:17 compute-0 ceph-mon[75654]: pgmap v364: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:17 compute-0 python3.9[158839]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:42:17 compute-0 sudo[158837]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:18 compute-0 sudo[158989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omcqqxeajypneajmczsgzabenmowqefe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114137.8176153-381-75374681249973/AnsiballZ_stat.py'
Nov 25 23:42:18 compute-0 sudo[158989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:18 compute-0 sudo[158989]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:18 compute-0 sudo[159112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlihgesmlttyzdsfstirizbkxehgsghr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114137.8176153-381-75374681249973/AnsiballZ_copy.py'
Nov 25 23:42:18 compute-0 sudo[159112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:19 compute-0 sudo[159112]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:19 compute-0 ceph-mon[75654]: pgmap v365: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:19 compute-0 sudo[159264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmdjpbazzfutikvlxojjgxpfpjaiopuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114139.4536269-398-131924875879205/AnsiballZ_container_config_data.py'
Nov 25 23:42:19 compute-0 sudo[159264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:20 compute-0 python3.9[159266]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 25 23:42:20 compute-0 sudo[159264]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:21 compute-0 sudo[159416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbzokcnhpulhrmxfvposrgjbrjxxcyeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114140.5771148-407-184938732860076/AnsiballZ_container_config_hash.py'
Nov 25 23:42:21 compute-0 sudo[159416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:21 compute-0 python3.9[159418]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 23:42:21 compute-0 sudo[159416]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:21 compute-0 ceph-mon[75654]: pgmap v366: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:22 compute-0 sudo[159568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppccdbkkallbxxbjhlmppgjtdjtrpule ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114141.581234-416-145417432198093/AnsiballZ_podman_container_info.py'
Nov 25 23:42:22 compute-0 sudo[159568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:22 compute-0 python3.9[159570]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 23:42:22 compute-0 sudo[159568]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:23 compute-0 ceph-mon[75654]: pgmap v367: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:23 compute-0 sudo[159747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwlfptvxluuizrcsnksnxcmgzxftudgt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764114143.2129571-429-26705252088859/AnsiballZ_edpm_container_manage.py'
Nov 25 23:42:23 compute-0 sudo[159747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:24 compute-0 python3[159749]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 23:42:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:25 compute-0 ceph-mon[75654]: pgmap v368: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:42:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:42:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:42:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:42:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:42:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:42:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:27 compute-0 ceph-mon[75654]: pgmap v369: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:27 compute-0 podman[159812]: 2025-11-25 23:42:27.640665986 +0000 UTC m=+1.464897029 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 23:42:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:29 compute-0 ceph-mon[75654]: pgmap v370: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:30 compute-0 ceph-mon[75654]: pgmap v371: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:32 compute-0 podman[159763]: 2025-11-25 23:42:32.687879205 +0000 UTC m=+8.604175134 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 23:42:32 compute-0 podman[159906]: 2025-11-25 23:42:32.928256653 +0000 UTC m=+0.071665813 container create 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 23:42:32 compute-0 podman[159906]: 2025-11-25 23:42:32.895123499 +0000 UTC m=+0.038532669 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 23:42:32 compute-0 python3[159749]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 23:42:33 compute-0 sudo[159747]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:33 compute-0 ceph-mon[75654]: pgmap v372: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:33 compute-0 sudo[160094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvahpajpexdraixwaxfedjqylpsuvius ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114153.3869061-437-119341006901188/AnsiballZ_stat.py'
Nov 25 23:42:33 compute-0 sudo[160094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:33 compute-0 python3.9[160096]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:42:34 compute-0 sudo[160094]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:34 compute-0 sudo[160248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykmgvuvxqegmsqzxddrnjydhwehrvtke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114154.3315384-446-39864312588282/AnsiballZ_file.py'
Nov 25 23:42:34 compute-0 sudo[160248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:34 compute-0 python3.9[160250]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:42:34 compute-0 sudo[160248]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:35 compute-0 sudo[160324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavowortafhjyrzqyvlylgwztkafbfjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114154.3315384-446-39864312588282/AnsiballZ_stat.py'
Nov 25 23:42:35 compute-0 sudo[160324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:35 compute-0 ceph-mon[75654]: pgmap v373: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:35 compute-0 python3.9[160326]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:42:35 compute-0 sudo[160324]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:36 compute-0 auditd[698]: Audit daemon rotating log files
Nov 25 23:42:36 compute-0 sudo[160475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gderotfipwjcnuiglkktwwncurhdppdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114155.5486336-446-252190717482420/AnsiballZ_copy.py'
Nov 25 23:42:36 compute-0 sudo[160475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:36 compute-0 python3.9[160477]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764114155.5486336-446-252190717482420/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:42:36 compute-0 sudo[160475]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:36 compute-0 sudo[160551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnfkpauqzgxdklcdavrilzvrtnnyoibl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114155.5486336-446-252190717482420/AnsiballZ_systemd.py'
Nov 25 23:42:36 compute-0 sudo[160551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:37 compute-0 python3.9[160553]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 23:42:37 compute-0 systemd[1]: Reloading.
Nov 25 23:42:37 compute-0 systemd-rc-local-generator[160577]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:42:37 compute-0 systemd-sysv-generator[160583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:42:37 compute-0 ceph-mon[75654]: pgmap v374: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:37 compute-0 sudo[160551]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:37 compute-0 sudo[160662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wifbirjmkzxnimqqznepwilyvwtyxsgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114155.5486336-446-252190717482420/AnsiballZ_systemd.py'
Nov 25 23:42:37 compute-0 sudo[160662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:38 compute-0 python3.9[160664]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:42:38 compute-0 systemd[1]: Reloading.
Nov 25 23:42:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:38 compute-0 systemd-rc-local-generator[160695]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:42:38 compute-0 systemd-sysv-generator[160698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:42:38 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 25 23:42:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f4d416747d1b87463ccd2a289bdde2cb2156a91081f47ed60d52fd569c26de4/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f4d416747d1b87463ccd2a289bdde2cb2156a91081f47ed60d52fd569c26de4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:38 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9.
Nov 25 23:42:38 compute-0 podman[160705]: 2025-11-25 23:42:38.779837522 +0000 UTC m=+0.243921502 container init 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: + sudo -E kolla_set_configs
Nov 25 23:42:38 compute-0 podman[160705]: 2025-11-25 23:42:38.822049698 +0000 UTC m=+0.286133598 container start 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 23:42:38 compute-0 edpm-start-podman-container[160705]: ovn_metadata_agent
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Validating config file
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Copying service configuration files
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Writing out command to execute
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: ++ cat /run_command
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: + CMD=neutron-ovn-metadata-agent
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: + ARGS=
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: + sudo kolla_copy_cacerts
Nov 25 23:42:38 compute-0 edpm-start-podman-container[160704]: Creating additional drop-in dependency for "ovn_metadata_agent" (9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9)
Nov 25 23:42:38 compute-0 podman[160727]: 2025-11-25 23:42:38.948790416 +0000 UTC m=+0.106188471 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: + [[ ! -n '' ]]
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: + . kolla_extend_start
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: Running command: 'neutron-ovn-metadata-agent'
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: + umask 0022
Nov 25 23:42:38 compute-0 ovn_metadata_agent[160720]: + exec neutron-ovn-metadata-agent
Nov 25 23:42:38 compute-0 systemd[1]: Reloading.
Nov 25 23:42:39 compute-0 systemd-rc-local-generator[160794]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:42:39 compute-0 systemd-sysv-generator[160799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:42:39 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 25 23:42:39 compute-0 sudo[160662]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:39 compute-0 ceph-mon[75654]: pgmap v375: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:39 compute-0 sshd-session[151466]: Connection closed by 192.168.122.30 port 43610
Nov 25 23:42:39 compute-0 sshd-session[151463]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:42:39 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 25 23:42:39 compute-0 systemd[1]: session-48.scope: Consumed 1min 2.729s CPU time.
Nov 25 23:42:39 compute-0 systemd-logind[789]: Session 48 logged out. Waiting for processes to exit.
Nov 25 23:42:39 compute-0 systemd-logind[789]: Removed session 48.
Nov 25 23:42:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.712 160725 INFO neutron.common.config [-] Logging enabled!
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.712 160725 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.713 160725 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.713 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.713 160725 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.713 160725 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.714 160725 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.715 160725 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.716 160725 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.717 160725 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.718 160725 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.719 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.720 160725 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.721 160725 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.722 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.723 160725 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.724 160725 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.725 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.726 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.727 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.728 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.729 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.730 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.731 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.732 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.733 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.734 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.735 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.736 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.737 160725 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.738 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.739 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.740 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.741 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.742 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.743 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.744 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.745 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.746 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.747 160725 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.748 160725 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.758 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.758 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.758 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.758 160725 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.759 160725 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.772 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 2ba84045-48af-49e3-86f7-35b32300977f (UUID: 2ba84045-48af-49e3-86f7-35b32300977f) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.801 160725 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.802 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.802 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.802 160725 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.806 160725 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.812 160725 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.819 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '2ba84045-48af-49e3-86f7-35b32300977f'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f9d3abb7cd0>], external_ids={}, name=2ba84045-48af-49e3-86f7-35b32300977f, nb_cfg_timestamp=1764114093880, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.820 160725 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f9d3abbab20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.821 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.821 160725 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.821 160725 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.821 160725 INFO oslo_service.service [-] Starting 1 workers
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.827 160725 DEBUG oslo_service.service [-] Started child 160834 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.830 160725 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp3no26x0o/privsep.sock']
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.831 160834 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-365978'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.855 160834 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.855 160834 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.856 160834 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.859 160834 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.865 160834 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 23:42:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:40.873 160834 INFO eventlet.wsgi.server [-] (160834) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 25 23:42:41 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 25 23:42:41 compute-0 ceph-mon[75654]: pgmap v376: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:41 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.549 160725 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 23:42:41 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.551 160725 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3no26x0o/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 23:42:41 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.407 160839 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 23:42:41 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.414 160839 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 23:42:41 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.417 160839 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 25 23:42:41 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.418 160839 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160839
Nov 25 23:42:41 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:41.556 160839 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a75ff2-3ffe-495f-a540-800c41d881a7]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.028 160839 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.028 160839 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.028 160839 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:42:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.527 160839 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f6e6f3-d8c1-45d4-ad44-2d1c0e0bd92d]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.530 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, column=external_ids, values=({'neutron:ovn-metadata-id': 'e7c6ed73-f602-5301-8e5d-9f4191b6b114'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.699 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.746 160725 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.746 160725 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.747 160725 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.747 160725 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.747 160725 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.747 160725 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.748 160725 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.748 160725 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.748 160725 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.748 160725 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.749 160725 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.749 160725 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.749 160725 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.749 160725 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.749 160725 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.750 160725 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.750 160725 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.750 160725 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.751 160725 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.751 160725 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.751 160725 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.751 160725 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.752 160725 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.752 160725 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.752 160725 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.752 160725 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.753 160725 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.753 160725 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.753 160725 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.754 160725 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.754 160725 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.754 160725 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.754 160725 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.755 160725 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.755 160725 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.755 160725 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.755 160725 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.756 160725 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.756 160725 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.756 160725 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.757 160725 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.757 160725 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.757 160725 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.757 160725 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.758 160725 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.758 160725 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.758 160725 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.758 160725 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.759 160725 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.759 160725 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.759 160725 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.759 160725 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.760 160725 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.760 160725 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.760 160725 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.760 160725 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.761 160725 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.761 160725 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.761 160725 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.761 160725 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.761 160725 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.762 160725 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.762 160725 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.762 160725 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.762 160725 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.763 160725 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.763 160725 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.763 160725 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.763 160725 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.764 160725 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.764 160725 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.764 160725 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.764 160725 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.765 160725 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.765 160725 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.765 160725 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.765 160725 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.765 160725 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.766 160725 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.766 160725 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.766 160725 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.766 160725 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.767 160725 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.767 160725 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.767 160725 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.767 160725 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.768 160725 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.768 160725 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.768 160725 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.768 160725 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.769 160725 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.769 160725 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.769 160725 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.769 160725 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.769 160725 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.770 160725 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.770 160725 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.770 160725 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.770 160725 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.770 160725 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.771 160725 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.771 160725 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.771 160725 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.771 160725 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.771 160725 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.772 160725 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.772 160725 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.772 160725 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.772 160725 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.773 160725 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.773 160725 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.773 160725 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.774 160725 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.774 160725 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.774 160725 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.774 160725 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.774 160725 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.775 160725 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.775 160725 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.775 160725 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.775 160725 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.776 160725 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.776 160725 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.776 160725 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.776 160725 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.777 160725 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.777 160725 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.777 160725 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.778 160725 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.778 160725 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.778 160725 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.778 160725 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.778 160725 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.779 160725 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.779 160725 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.779 160725 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.779 160725 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.780 160725 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.780 160725 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.780 160725 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.781 160725 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.781 160725 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.781 160725 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.781 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.782 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.782 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.782 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.782 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.782 160725 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.783 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.783 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.783 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.783 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.784 160725 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.785 160725 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.785 160725 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.785 160725 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.785 160725 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.786 160725 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.786 160725 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.786 160725 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.786 160725 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.786 160725 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.787 160725 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.787 160725 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.787 160725 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.787 160725 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.787 160725 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.788 160725 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.788 160725 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.788 160725 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.788 160725 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.789 160725 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.789 160725 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.789 160725 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.789 160725 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.789 160725 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.790 160725 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.790 160725 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.790 160725 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.790 160725 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.791 160725 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.791 160725 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.791 160725 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.791 160725 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.792 160725 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.792 160725 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.792 160725 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.792 160725 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.792 160725 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.793 160725 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.793 160725 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.793 160725 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.793 160725 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.794 160725 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.794 160725 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.794 160725 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.794 160725 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.795 160725 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.795 160725 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.795 160725 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.795 160725 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.796 160725 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.796 160725 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.796 160725 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.796 160725 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.796 160725 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.797 160725 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.797 160725 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.797 160725 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.797 160725 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.798 160725 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.799 160725 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.799 160725 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.799 160725 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.799 160725 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.799 160725 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.800 160725 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.801 160725 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.802 160725 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.803 160725 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.804 160725 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.805 160725 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.805 160725 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.805 160725 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.805 160725 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.805 160725 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.806 160725 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.806 160725 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.807 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.807 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.807 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.807 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.808 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.808 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.808 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.808 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.809 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.809 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.809 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.809 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.810 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.811 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.812 160725 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.813 160725 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.813 160725 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.813 160725 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:42:42 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:42:42.813 160725 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 23:42:43 compute-0 ceph-mon[75654]: pgmap v377: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:45 compute-0 ceph-mon[75654]: pgmap v378: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:45 compute-0 sshd-session[160846]: Accepted publickey for zuul from 192.168.122.30 port 53664 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:42:45 compute-0 systemd-logind[789]: New session 49 of user zuul.
Nov 25 23:42:45 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 25 23:42:45 compute-0 sshd-session[160846]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:42:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:46 compute-0 sshd-session[160844]: Received disconnect from 45.78.216.132 port 36926:11: Bye Bye [preauth]
Nov 25 23:42:46 compute-0 sshd-session[160844]: Disconnected from authenticating user root 45.78.216.132 port 36926 [preauth]
Nov 25 23:42:47 compute-0 python3.9[160999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:42:47 compute-0 ceph-mon[75654]: pgmap v379: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:48 compute-0 sudo[161153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkuefjwkofwadllcqkudhlbufuelbmzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114167.7348437-34-70165635130166/AnsiballZ_command.py'
Nov 25 23:42:48 compute-0 sudo[161153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:48 compute-0 python3.9[161155]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:42:48 compute-0 sudo[161153]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:49 compute-0 ceph-mon[75654]: pgmap v380: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:49 compute-0 sudo[161318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orcgzzeefmditejdqgmomjxmgcymgyff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114168.9832973-45-36355723896304/AnsiballZ_systemd_service.py'
Nov 25 23:42:49 compute-0 sudo[161318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:49 compute-0 python3.9[161320]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 23:42:49 compute-0 systemd[1]: Reloading.
Nov 25 23:42:50 compute-0 systemd-sysv-generator[161351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:42:50 compute-0 systemd-rc-local-generator[161348]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:42:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:50 compute-0 sudo[161318]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:51 compute-0 python3.9[161505]: ansible-ansible.builtin.service_facts Invoked
Nov 25 23:42:51 compute-0 network[161522]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 23:42:51 compute-0 network[161523]: 'network-scripts' will be removed from distribution in near future.
Nov 25 23:42:51 compute-0 network[161524]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 23:42:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:42:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 16.39 MB, 0.03 MB/s
                                           Interval WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:42:51 compute-0 ceph-mon[75654]: pgmap v381: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:53 compute-0 sudo[161563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:42:53 compute-0 sudo[161563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:53 compute-0 sudo[161563]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:53 compute-0 sudo[161591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:42:53 compute-0 sudo[161591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:53 compute-0 sudo[161591]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:53 compute-0 sudo[161620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:42:53 compute-0 sudo[161620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:53 compute-0 sudo[161620]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:53 compute-0 sudo[161648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 23:42:53 compute-0 sudo[161648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:53 compute-0 ceph-mon[75654]: pgmap v382: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:53 compute-0 sudo[161648]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:42:53 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:42:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:42:53 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:42:53 compute-0 sudo[161707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:42:53 compute-0 sudo[161707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:53 compute-0 sudo[161707]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:53 compute-0 sudo[161735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:42:53 compute-0 sudo[161735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:53 compute-0 sudo[161735]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:53 compute-0 sudo[161763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:42:53 compute-0 sudo[161763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:53 compute-0 sudo[161763]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:54 compute-0 sudo[161791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:42:54 compute-0 sudo[161791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:54 compute-0 sudo[161791]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:42:54 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:42:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:42:54 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:42:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:42:54 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:42:54 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev a5763175-0e10-4177-9cec-90ff04bfda29 does not exist
Nov 25 23:42:54 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 74f39a0b-e517-41be-b053-b6965b223d44 does not exist
Nov 25 23:42:54 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev c0815e81-357e-43c7-909e-6b42d2efb6e4 does not exist
Nov 25 23:42:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:42:54 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:42:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:42:54 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:42:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:42:54 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:42:54 compute-0 sudo[161870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:42:54 compute-0 sudo[161870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:54 compute-0 sudo[161870]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:54 compute-0 sudo[161895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:42:54 compute-0 sudo[161895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:54 compute-0 sudo[161895]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:42:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:42:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:42:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:42:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:42:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:42:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:42:54 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:42:54 compute-0 sudo[161920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:42:54 compute-0 sudo[161920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:54 compute-0 sudo[161920]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:54 compute-0 sudo[161945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:42:54 compute-0 sudo[161945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:55 compute-0 podman[162011]: 2025-11-25 23:42:55.189028136 +0000 UTC m=+0.059246681 container create 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 23:42:55 compute-0 systemd[1]: Started libpod-conmon-4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6.scope.
Nov 25 23:42:55 compute-0 podman[162011]: 2025-11-25 23:42:55.161211075 +0000 UTC m=+0.031429720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:42:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:42:55 compute-0 podman[162011]: 2025-11-25 23:42:55.290947522 +0000 UTC m=+0.161166097 container init 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 23:42:55 compute-0 podman[162011]: 2025-11-25 23:42:55.301203976 +0000 UTC m=+0.171422531 container start 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:42:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:42:55 compute-0 podman[162011]: 2025-11-25 23:42:55.304880794 +0000 UTC m=+0.175099429 container attach 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:42:55 compute-0 cranky_nash[162027]: 167 167
Nov 25 23:42:55 compute-0 systemd[1]: libpod-4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6.scope: Deactivated successfully.
Nov 25 23:42:55 compute-0 podman[162011]: 2025-11-25 23:42:55.309343052 +0000 UTC m=+0.179561597 container died 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:42:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8511940e529114739810384d7ebc24944ba7cca3e07053ae0c879b24c276f9b4-merged.mount: Deactivated successfully.
Nov 25 23:42:55 compute-0 podman[162011]: 2025-11-25 23:42:55.351509687 +0000 UTC m=+0.221728272 container remove 4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_nash, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 23:42:55 compute-0 systemd[1]: libpod-conmon-4066e5cf6e327be7f983f733d9c536f59b445d69ae64a97f39e2579d30fd61f6.scope: Deactivated successfully.
Nov 25 23:42:55 compute-0 podman[162053]: 2025-11-25 23:42:55.593055855 +0000 UTC m=+0.060064542 container create 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:42:55 compute-0 systemd[1]: Started libpod-conmon-671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64.scope.
Nov 25 23:42:55 compute-0 podman[162053]: 2025-11-25 23:42:55.562039309 +0000 UTC m=+0.029048076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:42:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:55 compute-0 podman[162053]: 2025-11-25 23:42:55.713106365 +0000 UTC m=+0.180115102 container init 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 25 23:42:55 compute-0 ceph-mon[75654]: pgmap v383: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:55 compute-0 podman[162053]: 2025-11-25 23:42:55.733496718 +0000 UTC m=+0.200505435 container start 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 23:42:55 compute-0 podman[162053]: 2025-11-25 23:42:55.737628408 +0000 UTC m=+0.204637125 container attach 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:42:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:42:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 16.54 MB, 0.03 MB/s
                                           Interval WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:42:56
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'backups', 'images', 'volumes', '.mgr']
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:42:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:56 compute-0 sudo[162258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmsecxqkwaqklrcnfubkzprpsyclhvay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114176.3544147-64-207984933233649/AnsiballZ_systemd_service.py'
Nov 25 23:42:56 compute-0 sudo[162258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:56 compute-0 magical_neumann[162074]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:42:56 compute-0 magical_neumann[162074]: --> relative data size: 1.0
Nov 25 23:42:56 compute-0 magical_neumann[162074]: --> All data devices are unavailable
Nov 25 23:42:56 compute-0 systemd[1]: libpod-671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64.scope: Deactivated successfully.
Nov 25 23:42:56 compute-0 systemd[1]: libpod-671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64.scope: Consumed 1.056s CPU time.
Nov 25 23:42:56 compute-0 podman[162053]: 2025-11-25 23:42:56.851739726 +0000 UTC m=+1.318748433 container died 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:42:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-07a631ab8a28f611bdbb8177e7352b16f49fa967489f6c3454ab618bd370432a-merged.mount: Deactivated successfully.
Nov 25 23:42:56 compute-0 podman[162053]: 2025-11-25 23:42:56.930618258 +0000 UTC m=+1.397626945 container remove 671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:42:56 compute-0 systemd[1]: libpod-conmon-671a4595637b5253177c187ea1a2ada26e78cc495123bc1ee3994ae368d16a64.scope: Deactivated successfully.
Nov 25 23:42:56 compute-0 sudo[161945]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:57 compute-0 sudo[162280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:42:57 compute-0 sudo[162280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:57 compute-0 sudo[162280]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:57 compute-0 python3.9[162260]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:42:57 compute-0 sudo[162305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:42:57 compute-0 sudo[162258]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:57 compute-0 sudo[162305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:57 compute-0 sudo[162305]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:57 compute-0 sudo[162331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:42:57 compute-0 sudo[162331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:57 compute-0 sudo[162331]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:57 compute-0 sudo[162380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:42:57 compute-0 sudo[162380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:57 compute-0 sudo[162581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yognxxjdycwihbrozlnkmktcwfufousu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114177.253574-64-222388153682425/AnsiballZ_systemd_service.py'
Nov 25 23:42:57 compute-0 sudo[162581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:57 compute-0 podman[162544]: 2025-11-25 23:42:57.677115007 +0000 UTC m=+0.077182779 container create 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 23:42:57 compute-0 systemd[1]: Started libpod-conmon-66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996.scope.
Nov 25 23:42:57 compute-0 ceph-mon[75654]: pgmap v384: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:57 compute-0 podman[162544]: 2025-11-25 23:42:57.64572303 +0000 UTC m=+0.045790892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:42:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:42:57 compute-0 podman[162544]: 2025-11-25 23:42:57.776759153 +0000 UTC m=+0.176826945 container init 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:42:57 compute-0 podman[162544]: 2025-11-25 23:42:57.784656963 +0000 UTC m=+0.184724755 container start 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:42:57 compute-0 podman[162544]: 2025-11-25 23:42:57.788612608 +0000 UTC m=+0.188680380 container attach 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:42:57 compute-0 clever_hodgkin[162586]: 167 167
Nov 25 23:42:57 compute-0 systemd[1]: libpod-66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996.scope: Deactivated successfully.
Nov 25 23:42:57 compute-0 podman[162544]: 2025-11-25 23:42:57.792909053 +0000 UTC m=+0.192976915 container died 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 23:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8be8344db39848e910f31907dce76d96bb9b9795ee2d4d3db2e1868a50c51e47-merged.mount: Deactivated successfully.
Nov 25 23:42:57 compute-0 podman[162544]: 2025-11-25 23:42:57.842489625 +0000 UTC m=+0.242557407 container remove 66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:42:57 compute-0 systemd[1]: libpod-conmon-66b5331ec6ed3cfd03deda2f6dd47a99d4e3019919b8c7cd15a698c09358e996.scope: Deactivated successfully.
Nov 25 23:42:57 compute-0 python3.9[162583]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:42:58 compute-0 sudo[162581]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:58 compute-0 podman[162612]: 2025-11-25 23:42:58.058875042 +0000 UTC m=+0.053639770 container create f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:42:58 compute-0 systemd[1]: Started libpod-conmon-f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466.scope.
Nov 25 23:42:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4741fe2e3ae45e7e8e9a5c27c66dffd861908a5e9e33dbcff28625ec81af1ed4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4741fe2e3ae45e7e8e9a5c27c66dffd861908a5e9e33dbcff28625ec81af1ed4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4741fe2e3ae45e7e8e9a5c27c66dffd861908a5e9e33dbcff28625ec81af1ed4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4741fe2e3ae45e7e8e9a5c27c66dffd861908a5e9e33dbcff28625ec81af1ed4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:42:58 compute-0 podman[162612]: 2025-11-25 23:42:58.036018403 +0000 UTC m=+0.030783211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:42:58 compute-0 podman[162612]: 2025-11-25 23:42:58.134758125 +0000 UTC m=+0.129522873 container init f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 25 23:42:58 compute-0 podman[162612]: 2025-11-25 23:42:58.149352945 +0000 UTC m=+0.144117713 container start f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:42:58 compute-0 podman[162612]: 2025-11-25 23:42:58.156507935 +0000 UTC m=+0.151272683 container attach f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:42:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:58 compute-0 sudo[162783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvbmtzchhnqhletjngaobuipruivclbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114178.1943932-64-25443165790747/AnsiballZ_systemd_service.py'
Nov 25 23:42:58 compute-0 sudo[162783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:58 compute-0 ceph-mon[75654]: pgmap v385: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:42:58 compute-0 python3.9[162785]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:42:58 compute-0 sudo[162783]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:58 compute-0 reverent_cohen[162652]: {
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:     "0": [
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:         {
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "devices": [
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "/dev/loop3"
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             ],
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_name": "ceph_lv0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_size": "21470642176",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "name": "ceph_lv0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "tags": {
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.cluster_name": "ceph",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.crush_device_class": "",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.encrypted": "0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.osd_id": "0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.type": "block",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.vdo": "0"
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             },
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "type": "block",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "vg_name": "ceph_vg0"
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:         }
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:     ],
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:     "1": [
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:         {
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "devices": [
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "/dev/loop4"
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             ],
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_name": "ceph_lv1",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_size": "21470642176",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "name": "ceph_lv1",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "tags": {
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.cluster_name": "ceph",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.crush_device_class": "",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.encrypted": "0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.osd_id": "1",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.type": "block",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.vdo": "0"
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             },
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "type": "block",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "vg_name": "ceph_vg1"
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:         }
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:     ],
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:     "2": [
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:         {
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "devices": [
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "/dev/loop5"
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             ],
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_name": "ceph_lv2",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_size": "21470642176",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "name": "ceph_lv2",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "tags": {
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.cluster_name": "ceph",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.crush_device_class": "",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.encrypted": "0",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.osd_id": "2",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.type": "block",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:                 "ceph.vdo": "0"
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             },
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "type": "block",
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:             "vg_name": "ceph_vg2"
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:         }
Nov 25 23:42:58 compute-0 reverent_cohen[162652]:     ]
Nov 25 23:42:58 compute-0 reverent_cohen[162652]: }
Nov 25 23:42:58 compute-0 systemd[1]: libpod-f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466.scope: Deactivated successfully.
Nov 25 23:42:58 compute-0 podman[162612]: 2025-11-25 23:42:58.931649818 +0000 UTC m=+0.926414556 container died f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 25 23:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4741fe2e3ae45e7e8e9a5c27c66dffd861908a5e9e33dbcff28625ec81af1ed4-merged.mount: Deactivated successfully.
Nov 25 23:42:58 compute-0 podman[162612]: 2025-11-25 23:42:58.98880402 +0000 UTC m=+0.983568758 container remove f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_cohen, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:42:59 compute-0 systemd[1]: libpod-conmon-f5865a00d79015c2931c244c06b50000a0bda3e47129a9bb846795b911d50466.scope: Deactivated successfully.
Nov 25 23:42:59 compute-0 sudo[162380]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:59 compute-0 sudo[162850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:42:59 compute-0 sudo[162850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:59 compute-0 sudo[162850]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:59 compute-0 sudo[162904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:42:59 compute-0 sudo[162904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:59 compute-0 sudo[162904]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:59 compute-0 sudo[162952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:42:59 compute-0 sudo[162952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:59 compute-0 sudo[162952]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:59 compute-0 sudo[162984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:42:59 compute-0 sudo[162984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:42:59 compute-0 sudo[163052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjswphxstuyfamytudcwgnlqcxwqzlft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114179.0180655-64-92827684263051/AnsiballZ_systemd_service.py'
Nov 25 23:42:59 compute-0 sudo[163052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:42:59 compute-0 podman[163095]: 2025-11-25 23:42:59.65616266 +0000 UTC m=+0.054356010 container create cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:42:59 compute-0 systemd[1]: Started libpod-conmon-cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb.scope.
Nov 25 23:42:59 compute-0 python3.9[163054]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:42:59 compute-0 podman[163095]: 2025-11-25 23:42:59.627610589 +0000 UTC m=+0.025803999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:42:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:42:59 compute-0 podman[163095]: 2025-11-25 23:42:59.739961474 +0000 UTC m=+0.138154834 container init cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:42:59 compute-0 podman[163095]: 2025-11-25 23:42:59.746914829 +0000 UTC m=+0.145108149 container start cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:42:59 compute-0 ecstatic_volhard[163112]: 167 167
Nov 25 23:42:59 compute-0 systemd[1]: libpod-cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb.scope: Deactivated successfully.
Nov 25 23:42:59 compute-0 sudo[163052]: pam_unix(sudo:session): session closed for user root
Nov 25 23:42:59 compute-0 podman[163095]: 2025-11-25 23:42:59.753281848 +0000 UTC m=+0.151475168 container attach cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:42:59 compute-0 podman[163095]: 2025-11-25 23:42:59.753732261 +0000 UTC m=+0.151925611 container died cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 23:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-84c9189166d0620fd0df381c12a47a1dd669e203940ea309cb9dc39ed5d73dfd-merged.mount: Deactivated successfully.
Nov 25 23:42:59 compute-0 podman[163095]: 2025-11-25 23:42:59.797914508 +0000 UTC m=+0.196107858 container remove cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:42:59 compute-0 systemd[1]: libpod-conmon-cfca735e5f925913110645e9978431c357ce78ea0951b594fa52a9a3084730bb.scope: Deactivated successfully.
Nov 25 23:42:59 compute-0 podman[163185]: 2025-11-25 23:42:59.967318424 +0000 UTC m=+0.034881781 container create a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 23:43:00 compute-0 systemd[1]: Started libpod-conmon-a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12.scope.
Nov 25 23:43:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71035ee13088e866fd522fc49f876771cd19dd8d8422b39d5b583428d80f2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71035ee13088e866fd522fc49f876771cd19dd8d8422b39d5b583428d80f2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71035ee13088e866fd522fc49f876771cd19dd8d8422b39d5b583428d80f2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e71035ee13088e866fd522fc49f876771cd19dd8d8422b39d5b583428d80f2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:43:00 compute-0 podman[163185]: 2025-11-25 23:42:59.952786987 +0000 UTC m=+0.020350374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:43:00 compute-0 podman[163185]: 2025-11-25 23:43:00.056147622 +0000 UTC m=+0.123711069 container init a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:43:00 compute-0 podman[163185]: 2025-11-25 23:43:00.071147861 +0000 UTC m=+0.138711218 container start a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:43:00 compute-0 podman[163185]: 2025-11-25 23:43:00.073787552 +0000 UTC m=+0.141350999 container attach a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:43:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v386: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:00 compute-0 sudo[163308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zifsnhpcfklurtzbbwlgqjjpwtxrwanz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114179.9099464-64-194850697185755/AnsiballZ_systemd_service.py'
Nov 25 23:43:00 compute-0 sudo[163308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:00 compute-0 python3.9[163310]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:43:00 compute-0 sudo[163308]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]: {
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "osd_id": 2,
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "type": "bluestore"
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:     },
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "osd_id": 1,
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "type": "bluestore"
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:     },
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "osd_id": 0,
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:         "type": "bluestore"
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]:     }
Nov 25 23:43:01 compute-0 blissful_kapitsa[163230]: }
Nov 25 23:43:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:43:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 16.16 MB, 0.03 MB/s
                                           Interval WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:43:01 compute-0 systemd[1]: libpod-a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12.scope: Deactivated successfully.
Nov 25 23:43:01 compute-0 podman[163185]: 2025-11-25 23:43:01.109684385 +0000 UTC m=+1.177247742 container died a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:43:01 compute-0 systemd[1]: libpod-a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12.scope: Consumed 1.044s CPU time.
Nov 25 23:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e71035ee13088e866fd522fc49f876771cd19dd8d8422b39d5b583428d80f2f-merged.mount: Deactivated successfully.
Nov 25 23:43:01 compute-0 podman[163185]: 2025-11-25 23:43:01.161845125 +0000 UTC m=+1.229408482 container remove a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 23:43:01 compute-0 systemd[1]: libpod-conmon-a452608d41f42ac41f4055553768b06bbf770b2009c2ff0480d6d44b103f6a12.scope: Deactivated successfully.
Nov 25 23:43:01 compute-0 sudo[162984]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:43:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:43:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:43:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 7dbd1a74-3536-4a54-bf19-dca989a92876 does not exist
Nov 25 23:43:01 compute-0 sudo[163520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slfgowcesbxsvrladivaocrsplexzdcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114180.7945912-64-115589163664459/AnsiballZ_systemd_service.py'
Nov 25 23:43:01 compute-0 sudo[163520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:01 compute-0 podman[163442]: 2025-11-25 23:43:01.234373328 +0000 UTC m=+0.099452632 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 23:43:01 compute-0 sudo[163526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:43:01 compute-0 sudo[163526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:43:01 compute-0 sudo[163526]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:01 compute-0 ceph-mon[75654]: pgmap v386: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:43:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:43:01 compute-0 sudo[163555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:43:01 compute-0 sudo[163555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:43:01 compute-0 sudo[163555]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:01 compute-0 python3.9[163529]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:43:01 compute-0 sudo[163520]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [devicehealth INFO root] Check health
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:43:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:43:02 compute-0 sudo[163730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpjcfqchvpwbfhezyckwbithdmcjpxzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114181.6703835-64-127846144500015/AnsiballZ_systemd_service.py'
Nov 25 23:43:02 compute-0 sudo[163730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:02 compute-0 python3.9[163732]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:43:02 compute-0 sudo[163730]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:03 compute-0 sudo[163883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdwgckfsstuefdcsyqmmwpseptmbvhir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114182.781138-116-44745809664134/AnsiballZ_file.py'
Nov 25 23:43:03 compute-0 sudo[163883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:03 compute-0 ceph-mon[75654]: pgmap v387: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:03 compute-0 python3.9[163885]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:03 compute-0 sudo[163883]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:04 compute-0 sudo[164035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfbztndzprajshreuosddmsquaneqdgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114183.6698215-116-261773472073951/AnsiballZ_file.py'
Nov 25 23:43:04 compute-0 sudo[164035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:04 compute-0 python3.9[164037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:04 compute-0 sudo[164035]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:04 compute-0 sudo[164193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oubbbqxpxcijnykoqhpksiegdycskzyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114184.4500196-116-154052871720041/AnsiballZ_file.py'
Nov 25 23:43:04 compute-0 sudo[164193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:05 compute-0 ceph-mon[75654]: pgmap v388: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:05 compute-0 python3.9[164195]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:05 compute-0 sudo[164193]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:06 compute-0 sudo[164347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixhfgubrfixmmqtfctubprwguprbkkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114185.9851046-116-261081638261394/AnsiballZ_file.py'
Nov 25 23:43:06 compute-0 sudo[164347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:06 compute-0 sshd-session[164196]: Invalid user user1 from 202.83.162.167 port 53840
Nov 25 23:43:06 compute-0 python3.9[164349]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:06 compute-0 sudo[164347]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:06 compute-0 sshd-session[164196]: Received disconnect from 202.83.162.167 port 53840:11: Bye Bye [preauth]
Nov 25 23:43:06 compute-0 sshd-session[164196]: Disconnected from invalid user user1 202.83.162.167 port 53840 [preauth]
Nov 25 23:43:07 compute-0 sudo[164499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xovitrtnjtjvhxikdpgnivhikcmeqtwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114186.6859221-116-22485747435851/AnsiballZ_file.py'
Nov 25 23:43:07 compute-0 sudo[164499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:07 compute-0 python3.9[164501]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:07 compute-0 sudo[164499]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:07 compute-0 ceph-mon[75654]: pgmap v389: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:07 compute-0 sudo[164651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdzghzdpudcdwqjtfbrsagusbcbqtbhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114187.4871268-116-154129894697375/AnsiballZ_file.py'
Nov 25 23:43:07 compute-0 sudo[164651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:08 compute-0 python3.9[164653]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:08 compute-0 sudo[164651]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v390: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:08 compute-0 sudo[164803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sownnqzznusirbsrnupvrmrjnfboiwvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114188.1699257-116-157511852186571/AnsiballZ_file.py'
Nov 25 23:43:08 compute-0 sudo[164803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:08 compute-0 python3.9[164805]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:08 compute-0 sudo[164803]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:09 compute-0 sudo[164955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnrrxsszfqzbkyzeacwvxvbwiayfmhxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114188.9577723-166-159409441570255/AnsiballZ_file.py'
Nov 25 23:43:09 compute-0 sudo[164955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:09 compute-0 podman[164957]: 2025-11-25 23:43:09.428673706 +0000 UTC m=+0.085587023 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:43:09 compute-0 python3.9[164958]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:09 compute-0 sudo[164955]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:09 compute-0 ceph-mon[75654]: pgmap v390: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:10 compute-0 sudo[165126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsktejeezeetpzclduefoyylgsfgfrkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114189.7198713-166-50591162610737/AnsiballZ_file.py'
Nov 25 23:43:10 compute-0 sudo[165126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:10 compute-0 python3.9[165128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:10 compute-0 sudo[165126]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:10 compute-0 sudo[165278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctqykhhnhyazaskvnpgjeigdpvndnqrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114190.630326-166-49386815710783/AnsiballZ_file.py'
Nov 25 23:43:10 compute-0 sudo[165278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:11 compute-0 python3.9[165280]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:11 compute-0 sudo[165278]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:11 compute-0 ceph-mon[75654]: pgmap v391: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:11 compute-0 sudo[165430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmjxhjuszrbfeekwaajxtgjqfbqzwbkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114191.303037-166-73173860288355/AnsiballZ_file.py'
Nov 25 23:43:11 compute-0 sudo[165430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:11 compute-0 python3.9[165432]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:11 compute-0 sudo[165430]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:12 compute-0 sudo[165582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vamepmvmfghkkcceutbpssbhgqfgrqsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114192.0416539-166-271184284989174/AnsiballZ_file.py'
Nov 25 23:43:12 compute-0 sudo[165582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:12 compute-0 python3.9[165584]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:12 compute-0 sudo[165582]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:13 compute-0 sudo[165734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jairuqfccqzqlfxypqzczmkkmusnsyrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114192.8360605-166-229225082752017/AnsiballZ_file.py'
Nov 25 23:43:13 compute-0 sudo[165734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:13 compute-0 python3.9[165736]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:13 compute-0 sudo[165734]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:13 compute-0 ceph-mon[75654]: pgmap v392: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:13 compute-0 sudo[165886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgbltuqgoaycyzuziafgpsqhltsmnrfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114193.5821471-166-187721904106834/AnsiballZ_file.py'
Nov 25 23:43:13 compute-0 sudo[165886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:14 compute-0 python3.9[165888]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:43:14 compute-0 sudo[165886]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:14 compute-0 sudo[166038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqambydbgimhmrdoijqgrqjocbkmkkxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114194.4551227-217-168920177472206/AnsiballZ_command.py'
Nov 25 23:43:14 compute-0 sudo[166038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:15 compute-0 python3.9[166040]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:43:15 compute-0 sudo[166038]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:15 compute-0 ceph-mon[75654]: pgmap v393: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:16 compute-0 python3.9[166192]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 23:43:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:16 compute-0 sudo[166342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cogwzzpxdfrsrnqjpvjxklmcjqeaeocu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114196.3322334-235-62131980622774/AnsiballZ_systemd_service.py'
Nov 25 23:43:16 compute-0 sudo[166342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:17 compute-0 python3.9[166344]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 23:43:17 compute-0 systemd[1]: Reloading.
Nov 25 23:43:17 compute-0 systemd-rc-local-generator[166371]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:43:17 compute-0 systemd-sysv-generator[166376]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:43:17 compute-0 sudo[166342]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:17 compute-0 ceph-mon[75654]: pgmap v394: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:18 compute-0 sudo[166529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqrkloghatduzlrodfqkeycxpidddilm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114197.6918602-243-135628753544407/AnsiballZ_command.py'
Nov 25 23:43:18 compute-0 sudo[166529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v395: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:18 compute-0 python3.9[166531]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:43:18 compute-0 sudo[166529]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:18 compute-0 sudo[166682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uokmeewbbbvytzfapfglxaompjomifio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114198.5053377-243-74552276862569/AnsiballZ_command.py'
Nov 25 23:43:18 compute-0 sudo[166682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:19 compute-0 python3.9[166684]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:43:19 compute-0 sudo[166682]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:19 compute-0 sudo[166835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnnsdhifxwenhlmriwxqlxctcairpzdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114199.269865-243-84131469565738/AnsiballZ_command.py'
Nov 25 23:43:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:20 compute-0 sudo[166835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:20 compute-0 ceph-mon[75654]: pgmap v395: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:20 compute-0 python3.9[166837]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:43:20 compute-0 sudo[166835]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:21 compute-0 sudo[166988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdlvsjbzgcubixvydmxbskarrmokfuzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114200.7804213-243-147264968696450/AnsiballZ_command.py'
Nov 25 23:43:21 compute-0 sudo[166988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:21 compute-0 python3.9[166990]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:43:21 compute-0 ceph-mon[75654]: pgmap v396: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:22 compute-0 sudo[166988]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:23 compute-0 sudo[167141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gijytchcyakksbbpbxjiacibincbtnkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114202.6420903-243-227449057307305/AnsiballZ_command.py'
Nov 25 23:43:23 compute-0 sudo[167141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:23 compute-0 python3.9[167143]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:43:23 compute-0 sudo[167141]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:23 compute-0 ceph-mon[75654]: pgmap v397: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.486416) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203486463, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1446, "num_deletes": 251, "total_data_size": 1573542, "memory_usage": 1606592, "flush_reason": "Manual Compaction"}
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203504604, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1532904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7623, "largest_seqno": 9068, "table_properties": {"data_size": 1526203, "index_size": 3840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13231, "raw_average_key_size": 18, "raw_value_size": 1512812, "raw_average_value_size": 2170, "num_data_blocks": 180, "num_entries": 697, "num_filter_entries": 697, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114046, "oldest_key_time": 1764114046, "file_creation_time": 1764114203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 18275 microseconds, and 9432 cpu microseconds.
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.504685) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1532904 bytes OK
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.504715) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.507133) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.507159) EVENT_LOG_v1 {"time_micros": 1764114203507151, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.507210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1567201, prev total WAL file size 1567201, number of live WAL files 2.
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.508269) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1496KB)], [23(4357KB)]
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203508338, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 5994564, "oldest_snapshot_seqno": -1}
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 2810 keys, 4719855 bytes, temperature: kUnknown
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203546824, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 4719855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4698610, "index_size": 13136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7045, "raw_key_size": 65139, "raw_average_key_size": 23, "raw_value_size": 4645755, "raw_average_value_size": 1653, "num_data_blocks": 587, "num_entries": 2810, "num_filter_entries": 2810, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.547092) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 4719855 bytes
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.548813) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.5 rd, 122.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 4.3 +0.0 blob) out(4.5 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 3324, records dropped: 514 output_compression: NoCompression
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.548842) EVENT_LOG_v1 {"time_micros": 1764114203548828, "job": 8, "event": "compaction_finished", "compaction_time_micros": 38552, "compaction_time_cpu_micros": 22768, "output_level": 6, "num_output_files": 1, "total_output_size": 4719855, "num_input_records": 3324, "num_output_records": 2810, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203549569, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114203550855, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.508099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.550921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.550929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.550933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.550936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:43:23 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:43:23.550938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:43:23 compute-0 sudo[167294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztoanbxsepkuugkwgjqduxstaddfulwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114203.4109674-243-86754420080960/AnsiballZ_command.py'
Nov 25 23:43:23 compute-0 sudo[167294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:24 compute-0 python3.9[167296]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:43:24 compute-0 sudo[167294]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:24 compute-0 sudo[167447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjlcwdhiekmmakvwsfimqfshvsdpaleg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114204.2518945-243-140039138897265/AnsiballZ_command.py'
Nov 25 23:43:24 compute-0 sudo[167447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:24 compute-0 python3.9[167449]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:43:24 compute-0 sudo[167447]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:25 compute-0 ceph-mon[75654]: pgmap v398: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:25 compute-0 sudo[167600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlfqnflquffmyjwuffvxrkrrripnohfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114205.3325062-297-218803577050194/AnsiballZ_getent.py'
Nov 25 23:43:25 compute-0 sudo[167600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:43:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:43:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:43:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:43:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:43:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:43:26 compute-0 python3.9[167602]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 25 23:43:26 compute-0 sudo[167600]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:26 compute-0 sudo[167753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baensqbtelczxhfdnplnmhuopgnpoift ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114206.359067-305-265144566543652/AnsiballZ_group.py'
Nov 25 23:43:26 compute-0 sudo[167753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:27 compute-0 python3.9[167755]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 23:43:27 compute-0 groupadd[167756]: group added to /etc/group: name=libvirt, GID=42473
Nov 25 23:43:27 compute-0 groupadd[167756]: group added to /etc/gshadow: name=libvirt
Nov 25 23:43:27 compute-0 groupadd[167756]: new group: name=libvirt, GID=42473
Nov 25 23:43:27 compute-0 sudo[167753]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:27 compute-0 ceph-mon[75654]: pgmap v399: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:27 compute-0 sudo[167911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yutjykqmpulxmvqpinhedhpijkjriohu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114207.4114196-313-260018241117663/AnsiballZ_user.py'
Nov 25 23:43:27 compute-0 sudo[167911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:28 compute-0 python3.9[167913]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 23:43:28 compute-0 useradd[167915]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 23:43:28 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 23:43:28 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 23:43:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v400: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:28 compute-0 sudo[167911]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:29 compute-0 sudo[168072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxidqdrouuushzfbqdyvulclnfkangeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114208.7079735-324-76902022128960/AnsiballZ_setup.py'
Nov 25 23:43:29 compute-0 sudo[168072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:29 compute-0 python3.9[168074]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:43:29 compute-0 ceph-mon[75654]: pgmap v400: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:29 compute-0 sudo[168072]: pam_unix(sudo:session): session closed for user root
Nov 25 23:43:30 compute-0 sudo[168156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwjgjagovqhimcexpbrcoxkcgdcaxlgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114208.7079735-324-76902022128960/AnsiballZ_dnf.py'
Nov 25 23:43:30 compute-0 sudo[168156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:43:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:30 compute-0 python3.9[168158]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:43:31 compute-0 ceph-mon[75654]: pgmap v401: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:32 compute-0 podman[168163]: 2025-11-25 23:43:32.374805909 +0000 UTC m=+0.193956514 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 23:43:33 compute-0 ceph-mon[75654]: pgmap v402: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:35 compute-0 ceph-mon[75654]: pgmap v403: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:37 compute-0 ceph-mon[75654]: pgmap v404: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:39 compute-0 ceph-mon[75654]: pgmap v405: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:40 compute-0 podman[168275]: 2025-11-25 23:43:40.262022484 +0000 UTC m=+0.086081892 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 23:43:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:43:40.750 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:43:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:43:40.751 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:43:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:43:40.751 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:43:41 compute-0 ceph-mon[75654]: pgmap v406: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v407: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:43 compute-0 ceph-mon[75654]: pgmap v407: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:45 compute-0 ceph-mon[75654]: pgmap v408: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v409: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:47 compute-0 sshd-session[168391]: Invalid user sftp from 124.18.141.70 port 56610
Nov 25 23:43:47 compute-0 sshd-session[168391]: Received disconnect from 124.18.141.70 port 56610:11: Bye Bye [preauth]
Nov 25 23:43:47 compute-0 sshd-session[168391]: Disconnected from invalid user sftp 124.18.141.70 port 56610 [preauth]
Nov 25 23:43:47 compute-0 ceph-mon[75654]: pgmap v409: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:49 compute-0 ceph-mon[75654]: pgmap v410: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v411: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:51 compute-0 ceph-mon[75654]: pgmap v411: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:53 compute-0 ceph-mon[75654]: pgmap v412: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:43:55 compute-0 ceph-mon[75654]: pgmap v413: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:43:56
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'volumes', 'backups', 'vms', 'cephfs.cephfs.data', '.mgr']
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:43:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v414: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:56 compute-0 sshd-session[168399]: Received disconnect from 171.244.140.22 port 43338:11: Bye Bye [preauth]
Nov 25 23:43:56 compute-0 sshd-session[168399]: Disconnected from authenticating user root 171.244.140.22 port 43338 [preauth]
Nov 25 23:43:57 compute-0 ceph-mon[75654]: pgmap v414: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:43:58 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Nov 25 23:43:58 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 23:43:58 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 23:43:58 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 23:43:58 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 23:43:58 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 23:43:58 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 23:43:58 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 23:43:59 compute-0 ceph-mon[75654]: pgmap v415: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:01 compute-0 sudo[168409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:44:01 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 25 23:44:01 compute-0 sudo[168409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:01 compute-0 sudo[168409]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:01 compute-0 sudo[168434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:44:01 compute-0 sudo[168434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:01 compute-0 sudo[168434]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:01 compute-0 sudo[168459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:44:01 compute-0 sudo[168459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:01 compute-0 sudo[168459]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:44:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:44:01 compute-0 sudo[168484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:44:01 compute-0 sudo[168484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:01 compute-0 ceph-mon[75654]: pgmap v416: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:02 compute-0 sudo[168484]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 23:44:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 23:44:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:44:02 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:44:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:44:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:44:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:44:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:44:02 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 010fbaab-a0ce-46e2-98ce-8e5d9c50d789 does not exist
Nov 25 23:44:02 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev eff805bd-9371-4266-8138-f0ec07e62935 does not exist
Nov 25 23:44:02 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 7da94ce2-822c-4721-900e-3f4d1045c44e does not exist
Nov 25 23:44:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:44:02 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:44:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:44:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:44:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:44:02 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:44:02 compute-0 sudo[168541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:44:02 compute-0 sudo[168541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:02 compute-0 sudo[168541]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:02 compute-0 sudo[168567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:44:02 compute-0 sudo[168567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:02 compute-0 sudo[168567]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:02 compute-0 sudo[168608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:44:02 compute-0 sudo[168608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:02 compute-0 sudo[168608]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:02 compute-0 podman[168565]: 2025-11-25 23:44:02.601217402 +0000 UTC m=+0.174404014 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 23:44:02 compute-0 sudo[168640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:44:02 compute-0 sudo[168640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:02 compute-0 ceph-mon[75654]: pgmap v417: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 23:44:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:44:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:44:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:44:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:44:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:44:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:44:03 compute-0 podman[168706]: 2025-11-25 23:44:03.031867478 +0000 UTC m=+0.055164750 container create e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 23:44:03 compute-0 systemd[1]: Started libpod-conmon-e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b.scope.
Nov 25 23:44:03 compute-0 podman[168706]: 2025-11-25 23:44:03.000598445 +0000 UTC m=+0.023895737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:44:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:44:03 compute-0 podman[168706]: 2025-11-25 23:44:03.121156254 +0000 UTC m=+0.144453576 container init e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 23:44:03 compute-0 podman[168706]: 2025-11-25 23:44:03.126820146 +0000 UTC m=+0.150117428 container start e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 23:44:03 compute-0 elated_bhabha[168722]: 167 167
Nov 25 23:44:03 compute-0 podman[168706]: 2025-11-25 23:44:03.130529084 +0000 UTC m=+0.153826356 container attach e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 23:44:03 compute-0 systemd[1]: libpod-e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b.scope: Deactivated successfully.
Nov 25 23:44:03 compute-0 podman[168706]: 2025-11-25 23:44:03.131256024 +0000 UTC m=+0.154553306 container died e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:44:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c4f98b17af44dac69a3c0d73f00b01e3703f55fdf858cfaf58d520604444410-merged.mount: Deactivated successfully.
Nov 25 23:44:03 compute-0 podman[168706]: 2025-11-25 23:44:03.188090217 +0000 UTC m=+0.211387499 container remove e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:44:03 compute-0 systemd[1]: libpod-conmon-e88743b69d5840699c8803042ba021196493f6d5c55e2f20f4c29d20d561239b.scope: Deactivated successfully.
Nov 25 23:44:03 compute-0 podman[168748]: 2025-11-25 23:44:03.364449032 +0000 UTC m=+0.044550217 container create 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 23:44:03 compute-0 systemd[1]: Started libpod-conmon-4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f.scope.
Nov 25 23:44:03 compute-0 podman[168748]: 2025-11-25 23:44:03.344098871 +0000 UTC m=+0.024200146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:44:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:03 compute-0 podman[168748]: 2025-11-25 23:44:03.476828245 +0000 UTC m=+0.156929510 container init 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:44:03 compute-0 podman[168748]: 2025-11-25 23:44:03.493832057 +0000 UTC m=+0.173933282 container start 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 23:44:03 compute-0 podman[168748]: 2025-11-25 23:44:03.498778718 +0000 UTC m=+0.178879983 container attach 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 23:44:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:04 compute-0 nifty_golick[168764]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:44:04 compute-0 nifty_golick[168764]: --> relative data size: 1.0
Nov 25 23:44:04 compute-0 nifty_golick[168764]: --> All data devices are unavailable
Nov 25 23:44:04 compute-0 systemd[1]: libpod-4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f.scope: Deactivated successfully.
Nov 25 23:44:04 compute-0 podman[168748]: 2025-11-25 23:44:04.580437736 +0000 UTC m=+1.260538931 container died 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 23:44:04 compute-0 systemd[1]: libpod-4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f.scope: Consumed 1.037s CPU time.
Nov 25 23:44:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0bcb74f0b859c8a9117708bb79854d6392929b32c1f4309ee1dc5f9104abfa9-merged.mount: Deactivated successfully.
Nov 25 23:44:04 compute-0 podman[168748]: 2025-11-25 23:44:04.632086691 +0000 UTC m=+1.312187876 container remove 4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:44:04 compute-0 systemd[1]: libpod-conmon-4cf56ca57028c2f23e3773d984a1e55ca59c8c2a82f36a710d7db57a2ba0a08f.scope: Deactivated successfully.
Nov 25 23:44:04 compute-0 sudo[168640]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:04 compute-0 sudo[168805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:44:04 compute-0 sudo[168805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:04 compute-0 sudo[168805]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:04 compute-0 sudo[168830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:44:04 compute-0 sudo[168830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:04 compute-0 sudo[168830]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:04 compute-0 sudo[168855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:44:04 compute-0 sudo[168855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:04 compute-0 sudo[168855]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:04 compute-0 sudo[168880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:44:04 compute-0 sudo[168880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:05 compute-0 ceph-mon[75654]: pgmap v418: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:05 compute-0 podman[168945]: 2025-11-25 23:44:05.461097963 +0000 UTC m=+0.072957594 container create 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:44:05 compute-0 systemd[1]: Started libpod-conmon-83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7.scope.
Nov 25 23:44:05 compute-0 podman[168945]: 2025-11-25 23:44:05.43136898 +0000 UTC m=+0.043228621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:44:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:44:05 compute-0 podman[168945]: 2025-11-25 23:44:05.556848591 +0000 UTC m=+0.168708212 container init 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 23:44:05 compute-0 podman[168945]: 2025-11-25 23:44:05.563734025 +0000 UTC m=+0.175593626 container start 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:44:05 compute-0 podman[168945]: 2025-11-25 23:44:05.567531246 +0000 UTC m=+0.179390847 container attach 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 23:44:05 compute-0 systemd[1]: libpod-83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7.scope: Deactivated successfully.
Nov 25 23:44:05 compute-0 inspiring_montalcini[168962]: 167 167
Nov 25 23:44:05 compute-0 conmon[168962]: conmon 83e3c82622b4c810753d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7.scope/container/memory.events
Nov 25 23:44:05 compute-0 podman[168945]: 2025-11-25 23:44:05.571968154 +0000 UTC m=+0.183827775 container died 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-447477ea12e713bbae6911944c10495c60fdd2fd09f2beeea54e0159344a90ea-merged.mount: Deactivated successfully.
Nov 25 23:44:05 compute-0 podman[168945]: 2025-11-25 23:44:05.610605082 +0000 UTC m=+0.222464693 container remove 83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_montalcini, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 25 23:44:05 compute-0 systemd[1]: libpod-conmon-83e3c82622b4c810753d364a4f6aca08dd7028e05e97b85247f7a573a306aae7.scope: Deactivated successfully.
Nov 25 23:44:05 compute-0 podman[168986]: 2025-11-25 23:44:05.831069082 +0000 UTC m=+0.055456048 container create 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:44:05 compute-0 systemd[1]: Started libpod-conmon-1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2.scope.
Nov 25 23:44:05 compute-0 podman[168986]: 2025-11-25 23:44:05.812889048 +0000 UTC m=+0.037276094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:44:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687b98ffce40f2aa4370f1c9122683e7b5de4d1ed3786d32f69785026d10f73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687b98ffce40f2aa4370f1c9122683e7b5de4d1ed3786d32f69785026d10f73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687b98ffce40f2aa4370f1c9122683e7b5de4d1ed3786d32f69785026d10f73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f687b98ffce40f2aa4370f1c9122683e7b5de4d1ed3786d32f69785026d10f73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:05 compute-0 podman[168986]: 2025-11-25 23:44:05.947305247 +0000 UTC m=+0.171692293 container init 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:44:05 compute-0 podman[168986]: 2025-11-25 23:44:05.961749011 +0000 UTC m=+0.186136017 container start 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:44:05 compute-0 podman[168986]: 2025-11-25 23:44:05.965858971 +0000 UTC m=+0.190245977 container attach 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:44:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:06 compute-0 elegant_faraday[169003]: {
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:     "0": [
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:         {
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "devices": [
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "/dev/loop3"
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             ],
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_name": "ceph_lv0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_size": "21470642176",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "name": "ceph_lv0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "tags": {
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.cluster_name": "ceph",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.crush_device_class": "",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.encrypted": "0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.osd_id": "0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.type": "block",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.vdo": "0"
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             },
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "type": "block",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "vg_name": "ceph_vg0"
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:         }
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:     ],
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:     "1": [
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:         {
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "devices": [
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "/dev/loop4"
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             ],
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_name": "ceph_lv1",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_size": "21470642176",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "name": "ceph_lv1",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "tags": {
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.cluster_name": "ceph",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.crush_device_class": "",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.encrypted": "0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.osd_id": "1",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.type": "block",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.vdo": "0"
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             },
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "type": "block",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "vg_name": "ceph_vg1"
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:         }
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:     ],
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:     "2": [
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:         {
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "devices": [
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "/dev/loop5"
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             ],
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_name": "ceph_lv2",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_size": "21470642176",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "name": "ceph_lv2",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "tags": {
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.cluster_name": "ceph",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.crush_device_class": "",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.encrypted": "0",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.osd_id": "2",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.type": "block",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:                 "ceph.vdo": "0"
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             },
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "type": "block",
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:             "vg_name": "ceph_vg2"
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:         }
Nov 25 23:44:06 compute-0 elegant_faraday[169003]:     ]
Nov 25 23:44:06 compute-0 elegant_faraday[169003]: }
Nov 25 23:44:06 compute-0 systemd[1]: libpod-1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2.scope: Deactivated successfully.
Nov 25 23:44:06 compute-0 podman[168986]: 2025-11-25 23:44:06.70240767 +0000 UTC m=+0.926794666 container died 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f687b98ffce40f2aa4370f1c9122683e7b5de4d1ed3786d32f69785026d10f73-merged.mount: Deactivated successfully.
Nov 25 23:44:06 compute-0 podman[168986]: 2025-11-25 23:44:06.787053774 +0000 UTC m=+1.011440750 container remove 1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 23:44:06 compute-0 systemd[1]: libpod-conmon-1e3acf4ca2ee3001e8d5e223bbf3c4be7a8785c32a1b6b0261af3113a933c3d2.scope: Deactivated successfully.
Nov 25 23:44:06 compute-0 sudo[168880]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:06 compute-0 sudo[169026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:44:06 compute-0 sudo[169026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:06 compute-0 sudo[169026]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:06 compute-0 sudo[169051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:44:06 compute-0 sudo[169051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:06 compute-0 sudo[169051]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:07 compute-0 sudo[169076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:44:07 compute-0 sudo[169076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:07 compute-0 sudo[169076]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:07 compute-0 sudo[169101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:44:07 compute-0 sudo[169101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:07 compute-0 ceph-mon[75654]: pgmap v419: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:07 compute-0 podman[169166]: 2025-11-25 23:44:07.571994581 +0000 UTC m=+0.065157075 container create 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 23:44:07 compute-0 systemd[1]: Started libpod-conmon-61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8.scope.
Nov 25 23:44:07 compute-0 podman[169166]: 2025-11-25 23:44:07.52758973 +0000 UTC m=+0.020752264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:44:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:44:07 compute-0 podman[169166]: 2025-11-25 23:44:07.732125585 +0000 UTC m=+0.225288109 container init 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:44:07 compute-0 podman[169166]: 2025-11-25 23:44:07.74772631 +0000 UTC m=+0.240888804 container start 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 23:44:07 compute-0 exciting_nobel[169183]: 167 167
Nov 25 23:44:07 compute-0 systemd[1]: libpod-61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8.scope: Deactivated successfully.
Nov 25 23:44:07 compute-0 podman[169166]: 2025-11-25 23:44:07.803840024 +0000 UTC m=+0.297002548 container attach 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:44:07 compute-0 podman[169166]: 2025-11-25 23:44:07.804539082 +0000 UTC m=+0.297701606 container died 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 23:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-31544bb57c45cbba3f49f53f92f62c8634946eaad3582210ac2b243d414df45b-merged.mount: Deactivated successfully.
Nov 25 23:44:07 compute-0 podman[169166]: 2025-11-25 23:44:07.881331687 +0000 UTC m=+0.374494211 container remove 61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 23:44:07 compute-0 systemd[1]: libpod-conmon-61b8eb85d8f0f41af37765b93fa217c01e31b52b6cbc528769277c3b01db88c8.scope: Deactivated successfully.
Nov 25 23:44:08 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Nov 25 23:44:08 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 23:44:08 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 23:44:08 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 23:44:08 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 23:44:08 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 23:44:08 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 23:44:08 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 23:44:08 compute-0 podman[169213]: 2025-11-25 23:44:08.096355142 +0000 UTC m=+0.062343631 container create bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 23:44:08 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 25 23:44:08 compute-0 systemd[1]: Started libpod-conmon-bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35.scope.
Nov 25 23:44:08 compute-0 podman[169213]: 2025-11-25 23:44:08.062236733 +0000 UTC m=+0.028225252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:44:08 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bbc8791309010021c61c99298ac6aa0f5aeb6f7433b58a00004f325a03e7a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bbc8791309010021c61c99298ac6aa0f5aeb6f7433b58a00004f325a03e7a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bbc8791309010021c61c99298ac6aa0f5aeb6f7433b58a00004f325a03e7a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bbc8791309010021c61c99298ac6aa0f5aeb6f7433b58a00004f325a03e7a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:44:08 compute-0 podman[169213]: 2025-11-25 23:44:08.1830545 +0000 UTC m=+0.149043029 container init bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:44:08 compute-0 podman[169213]: 2025-11-25 23:44:08.196637121 +0000 UTC m=+0.162625650 container start bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:44:08 compute-0 podman[169213]: 2025-11-25 23:44:08.200858434 +0000 UTC m=+0.166846963 container attach bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 23:44:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:09 compute-0 gallant_rubin[169230]: {
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "osd_id": 2,
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "type": "bluestore"
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:     },
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "osd_id": 1,
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "type": "bluestore"
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:     },
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "osd_id": 0,
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:         "type": "bluestore"
Nov 25 23:44:09 compute-0 gallant_rubin[169230]:     }
Nov 25 23:44:09 compute-0 gallant_rubin[169230]: }
Nov 25 23:44:09 compute-0 systemd[1]: libpod-bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35.scope: Deactivated successfully.
Nov 25 23:44:09 compute-0 podman[169213]: 2025-11-25 23:44:09.132855587 +0000 UTC m=+1.098844096 container died bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-00bbc8791309010021c61c99298ac6aa0f5aeb6f7433b58a00004f325a03e7a4-merged.mount: Deactivated successfully.
Nov 25 23:44:09 compute-0 podman[169213]: 2025-11-25 23:44:09.207286609 +0000 UTC m=+1.173275108 container remove bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:44:09 compute-0 systemd[1]: libpod-conmon-bd0b11bddc11a2664578ea83b0ac2d206924516f7a44d8c07ccb4070d72edb35.scope: Deactivated successfully.
Nov 25 23:44:09 compute-0 sudo[169101]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:44:09 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:44:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:44:09 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:44:09 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 71c11069-eda9-4fef-aa8f-649010eb9c87 does not exist
Nov 25 23:44:09 compute-0 sudo[169274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:44:09 compute-0 sudo[169274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:09 compute-0 sudo[169274]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:09 compute-0 ceph-mon[75654]: pgmap v420: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:44:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:44:09 compute-0 sudo[169299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:44:09 compute-0 sudo[169299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:44:09 compute-0 sudo[169299]: pam_unix(sudo:session): session closed for user root
Nov 25 23:44:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:11 compute-0 podman[169324]: 2025-11-25 23:44:11.272565893 +0000 UTC m=+0.089689298 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 23:44:11 compute-0 ceph-mon[75654]: pgmap v421: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:13 compute-0 ceph-mon[75654]: pgmap v422: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:15 compute-0 ceph-mon[75654]: pgmap v423: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v424: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:17 compute-0 ceph-mon[75654]: pgmap v424: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:19 compute-0 ceph-mon[75654]: pgmap v425: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:21 compute-0 ceph-mon[75654]: pgmap v426: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v427: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:23 compute-0 ceph-mon[75654]: pgmap v427: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:24 compute-0 ceph-mon[75654]: pgmap v428: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:44:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:44:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:44:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:44:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:44:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:44:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:27 compute-0 ceph-mon[75654]: pgmap v429: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:29 compute-0 ceph-mon[75654]: pgmap v430: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:31 compute-0 ceph-mon[75654]: pgmap v431: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:33 compute-0 podman[175921]: 2025-11-25 23:44:33.293303934 +0000 UTC m=+0.116467571 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:44:33 compute-0 ceph-mon[75654]: pgmap v432: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:35 compute-0 ceph-mon[75654]: pgmap v433: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:35 compute-0 sshd-session[176588]: Invalid user user from 202.83.162.167 port 60148
Nov 25 23:44:35 compute-0 sshd-session[176588]: Received disconnect from 202.83.162.167 port 60148:11: Bye Bye [preauth]
Nov 25 23:44:35 compute-0 sshd-session[176588]: Disconnected from invalid user user 202.83.162.167 port 60148 [preauth]
Nov 25 23:44:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:37 compute-0 ceph-mon[75654]: pgmap v434: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:39 compute-0 ceph-mon[75654]: pgmap v435: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:44:40.752 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:44:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:44:40.752 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:44:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:44:40.752 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:44:41 compute-0 ceph-mon[75654]: pgmap v436: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:42 compute-0 podman[180115]: 2025-11-25 23:44:42.263115248 +0000 UTC m=+0.078488994 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 23:44:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:43 compute-0 ceph-mon[75654]: pgmap v437: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:45 compute-0 ceph-mon[75654]: pgmap v438: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v439: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:47 compute-0 ceph-mon[75654]: pgmap v439: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:49 compute-0 ceph-mon[75654]: pgmap v440: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:51 compute-0 ceph-mon[75654]: pgmap v441: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:53 compute-0 ceph-mon[75654]: pgmap v442: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v443: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:44:55 compute-0 ceph-mon[75654]: pgmap v443: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:44:56
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'volumes']
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:44:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:57 compute-0 ceph-mon[75654]: pgmap v444: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:44:59 compute-0 ceph-mon[75654]: pgmap v445: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:45:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:45:01 compute-0 ceph-mon[75654]: pgmap v446: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:03 compute-0 ceph-mon[75654]: pgmap v447: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:04 compute-0 podman[186187]: 2025-11-25 23:45:04.265911793 +0000 UTC m=+0.086356153 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 25 23:45:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:05 compute-0 ceph-mon[75654]: pgmap v448: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v449: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:06 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 25 23:45:06 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 23:45:06 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 23:45:06 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 23:45:06 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 23:45:06 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 23:45:06 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 23:45:06 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 23:45:07 compute-0 ceph-mon[75654]: pgmap v449: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:08 compute-0 groupadd[186226]: group added to /etc/group: name=dnsmasq, GID=991
Nov 25 23:45:08 compute-0 groupadd[186226]: group added to /etc/gshadow: name=dnsmasq
Nov 25 23:45:08 compute-0 groupadd[186226]: new group: name=dnsmasq, GID=991
Nov 25 23:45:08 compute-0 useradd[186233]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 25 23:45:08 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 23:45:08 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 25 23:45:08 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Nov 25 23:45:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:08 compute-0 ceph-mon[75654]: pgmap v450: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:09 compute-0 groupadd[186246]: group added to /etc/group: name=clevis, GID=990
Nov 25 23:45:09 compute-0 groupadd[186246]: group added to /etc/gshadow: name=clevis
Nov 25 23:45:09 compute-0 groupadd[186246]: new group: name=clevis, GID=990
Nov 25 23:45:09 compute-0 useradd[186253]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 25 23:45:09 compute-0 usermod[186263]: add 'clevis' to group 'tss'
Nov 25 23:45:09 compute-0 usermod[186263]: add 'clevis' to shadow group 'tss'
Nov 25 23:45:09 compute-0 sudo[186270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:09 compute-0 sudo[186270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:09 compute-0 sudo[186270]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:09 compute-0 sudo[186295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:45:09 compute-0 sudo[186295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:09 compute-0 sudo[186295]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:09 compute-0 sudo[186320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:09 compute-0 sudo[186320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:09 compute-0 sudo[186320]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:09 compute-0 sudo[186348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 23:45:09 compute-0 sudo[186348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:10 compute-0 podman[186452]: 2025-11-25 23:45:10.481450696 +0000 UTC m=+0.118029773 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:45:10 compute-0 podman[186452]: 2025-11-25 23:45:10.591068586 +0000 UTC m=+0.227647613 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:45:11 compute-0 sudo[186348]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:11 compute-0 ceph-mon[75654]: pgmap v451: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:45:11 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:45:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:45:11 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:45:11 compute-0 sudo[186594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:11 compute-0 sudo[186594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:11 compute-0 sudo[186594]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:11 compute-0 polkitd[44014]: Reloading rules
Nov 25 23:45:11 compute-0 polkitd[44014]: Collecting garbage unconditionally...
Nov 25 23:45:11 compute-0 polkitd[44014]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 23:45:11 compute-0 polkitd[44014]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 23:45:11 compute-0 polkitd[44014]: Finished loading, compiling and executing 3 rules
Nov 25 23:45:11 compute-0 polkitd[44014]: Reloading rules
Nov 25 23:45:11 compute-0 polkitd[44014]: Collecting garbage unconditionally...
Nov 25 23:45:11 compute-0 polkitd[44014]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 23:45:11 compute-0 polkitd[44014]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 23:45:11 compute-0 polkitd[44014]: Finished loading, compiling and executing 3 rules
Nov 25 23:45:11 compute-0 sudo[186629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:45:11 compute-0 sudo[186629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:11 compute-0 sudo[186629]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:11 compute-0 sudo[186662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:11 compute-0 sudo[186662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:11 compute-0 sudo[186662]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:11 compute-0 sudo[186700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:45:11 compute-0 sudo[186700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:12 compute-0 sudo[186700]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:45:12 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:45:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:45:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:45:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:45:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:45:12 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 43793928-8cba-4a30-931c-f1cdfe973d7f does not exist
Nov 25 23:45:12 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 85588371-a430-4799-be4c-b5a7e27c81b5 does not exist
Nov 25 23:45:12 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 27570809-1dbb-4236-a604-437f6bb04b02 does not exist
Nov 25 23:45:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:45:12 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:45:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:45:12 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:45:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:45:12 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:45:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:12 compute-0 sudo[186846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:12 compute-0 sudo[186846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:12 compute-0 sudo[186846]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:12 compute-0 sudo[186878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:45:12 compute-0 sudo[186878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:45:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:45:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:45:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:45:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:45:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:45:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:45:12 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:45:12 compute-0 sudo[186878]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:12 compute-0 podman[186876]: 2025-11-25 23:45:12.407484175 +0000 UTC m=+0.068120639 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 23:45:12 compute-0 sudo[186929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:12 compute-0 sudo[186929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:12 compute-0 sudo[186929]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:12 compute-0 sudo[186967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:45:12 compute-0 sudo[186967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:12 compute-0 groupadd[187021]: group added to /etc/group: name=ceph, GID=167
Nov 25 23:45:12 compute-0 groupadd[187021]: group added to /etc/gshadow: name=ceph
Nov 25 23:45:12 compute-0 groupadd[187021]: new group: name=ceph, GID=167
Nov 25 23:45:12 compute-0 useradd[187041]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 25 23:45:12 compute-0 podman[187062]: 2025-11-25 23:45:12.896820632 +0000 UTC m=+0.074392976 container create f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 23:45:12 compute-0 systemd[1]: Started libpod-conmon-f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618.scope.
Nov 25 23:45:12 compute-0 podman[187062]: 2025-11-25 23:45:12.867676738 +0000 UTC m=+0.045249102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:45:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:45:13 compute-0 podman[187062]: 2025-11-25 23:45:13.007366556 +0000 UTC m=+0.184938900 container init f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:45:13 compute-0 podman[187062]: 2025-11-25 23:45:13.019648062 +0000 UTC m=+0.197220436 container start f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:45:13 compute-0 podman[187062]: 2025-11-25 23:45:13.023427222 +0000 UTC m=+0.200999596 container attach f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:45:13 compute-0 cranky_rosalind[187078]: 167 167
Nov 25 23:45:13 compute-0 systemd[1]: libpod-f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618.scope: Deactivated successfully.
Nov 25 23:45:13 compute-0 podman[187062]: 2025-11-25 23:45:13.028931688 +0000 UTC m=+0.206504042 container died f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-de67faf27eb09da5177414384b64db47e8f3c15d72aa4faef2298b89e75b3b2c-merged.mount: Deactivated successfully.
Nov 25 23:45:13 compute-0 podman[187062]: 2025-11-25 23:45:13.073749427 +0000 UTC m=+0.251321771 container remove f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:45:13 compute-0 systemd[1]: libpod-conmon-f534f3c6b5fa2d72983601da32d3d8909e7148edfb0fba79e49f0f8512c07618.scope: Deactivated successfully.
Nov 25 23:45:13 compute-0 podman[187102]: 2025-11-25 23:45:13.333002499 +0000 UTC m=+0.074172120 container create 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:45:13 compute-0 systemd[1]: Started libpod-conmon-501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59.scope.
Nov 25 23:45:13 compute-0 ceph-mon[75654]: pgmap v452: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:13 compute-0 podman[187102]: 2025-11-25 23:45:13.301383989 +0000 UTC m=+0.042553660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:45:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:13 compute-0 podman[187102]: 2025-11-25 23:45:13.450059025 +0000 UTC m=+0.191228646 container init 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:45:13 compute-0 podman[187102]: 2025-11-25 23:45:13.460665077 +0000 UTC m=+0.201834688 container start 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:45:13 compute-0 podman[187102]: 2025-11-25 23:45:13.464904819 +0000 UTC m=+0.206074430 container attach 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:45:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:14 compute-0 quirky_hermann[187119]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:45:14 compute-0 quirky_hermann[187119]: --> relative data size: 1.0
Nov 25 23:45:14 compute-0 quirky_hermann[187119]: --> All data devices are unavailable
Nov 25 23:45:14 compute-0 systemd[1]: libpod-501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59.scope: Deactivated successfully.
Nov 25 23:45:14 compute-0 systemd[1]: libpod-501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59.scope: Consumed 1.088s CPU time.
Nov 25 23:45:14 compute-0 podman[187102]: 2025-11-25 23:45:14.596835171 +0000 UTC m=+1.338004822 container died 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 25 23:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ac84d92d8c16ddb484109a296b24b7899495aa4b1d4f7743176432c18007444-merged.mount: Deactivated successfully.
Nov 25 23:45:14 compute-0 podman[187102]: 2025-11-25 23:45:14.680113022 +0000 UTC m=+1.421282613 container remove 501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:45:14 compute-0 systemd[1]: libpod-conmon-501d3900fe403667a48331a180a1c61b71b32847ef6ec1c0a7f0d205321dfe59.scope: Deactivated successfully.
Nov 25 23:45:14 compute-0 sudo[186967]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:14 compute-0 sudo[187260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:14 compute-0 sudo[187260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:14 compute-0 sudo[187260]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:14 compute-0 sudo[187325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:45:14 compute-0 sudo[187325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:14 compute-0 sudo[187325]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:14 compute-0 sudo[187380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:14 compute-0 sudo[187380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:14 compute-0 sudo[187380]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:15 compute-0 sudo[187438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:45:15 compute-0 sudo[187438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:15 compute-0 ceph-mon[75654]: pgmap v453: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:15 compute-0 podman[187695]: 2025-11-25 23:45:15.401929408 +0000 UTC m=+0.065807417 container create d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:45:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:15 compute-0 systemd[1]: Started libpod-conmon-d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0.scope.
Nov 25 23:45:15 compute-0 podman[187695]: 2025-11-25 23:45:15.375018004 +0000 UTC m=+0.038896093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:45:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:45:15 compute-0 podman[187695]: 2025-11-25 23:45:15.492993645 +0000 UTC m=+0.156871704 container init d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 23:45:15 compute-0 podman[187695]: 2025-11-25 23:45:15.498979475 +0000 UTC m=+0.162857474 container start d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 23:45:15 compute-0 podman[187695]: 2025-11-25 23:45:15.501873222 +0000 UTC m=+0.165751221 container attach d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:45:15 compute-0 upbeat_kapitsa[187783]: 167 167
Nov 25 23:45:15 compute-0 systemd[1]: libpod-d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0.scope: Deactivated successfully.
Nov 25 23:45:15 compute-0 podman[187695]: 2025-11-25 23:45:15.506219777 +0000 UTC m=+0.170097796 container died d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:45:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d36716eeb2fe7ca2ad3809314b42107f675ecd1f0519a755b205aef7da3330ba-merged.mount: Deactivated successfully.
Nov 25 23:45:15 compute-0 podman[187695]: 2025-11-25 23:45:15.541613616 +0000 UTC m=+0.205491605 container remove d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:45:15 compute-0 systemd[1]: libpod-conmon-d61767c6c3becf3dbb3ed29191ac2f715a3f2b1c5a9111199bf13654a8d761f0.scope: Deactivated successfully.
Nov 25 23:45:15 compute-0 podman[187949]: 2025-11-25 23:45:15.695620243 +0000 UTC m=+0.037197448 container create 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:45:15 compute-0 systemd[1]: Started libpod-conmon-520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79.scope.
Nov 25 23:45:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702683a2e6c046fd88b94804e6d933a5b9781cee3e3ce23e66afbb5a5abb885/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702683a2e6c046fd88b94804e6d933a5b9781cee3e3ce23e66afbb5a5abb885/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702683a2e6c046fd88b94804e6d933a5b9781cee3e3ce23e66afbb5a5abb885/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c702683a2e6c046fd88b94804e6d933a5b9781cee3e3ce23e66afbb5a5abb885/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:15 compute-0 sshd-session[187146]: Invalid user apagar from 45.78.216.132 port 40116
Nov 25 23:45:15 compute-0 podman[187949]: 2025-11-25 23:45:15.678290064 +0000 UTC m=+0.019867289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:45:15 compute-0 podman[187949]: 2025-11-25 23:45:15.777641111 +0000 UTC m=+0.119218346 container init 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 23:45:15 compute-0 podman[187949]: 2025-11-25 23:45:15.790377879 +0000 UTC m=+0.131955074 container start 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:45:15 compute-0 podman[187949]: 2025-11-25 23:45:15.79496457 +0000 UTC m=+0.136541795 container attach 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 23:45:15 compute-0 sshd[1002]: Received signal 15; terminating.
Nov 25 23:45:15 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 25 23:45:15 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 25 23:45:15 compute-0 systemd[1]: sshd.service: Unit process 187146 (sshd-session) remains running after unit stopped.
Nov 25 23:45:15 compute-0 systemd[1]: sshd.service: Unit process 187148 (sshd-session) remains running after unit stopped.
Nov 25 23:45:15 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 25 23:45:15 compute-0 systemd[1]: sshd.service: Consumed 16.482s CPU time, 38.2M memory peak, read 564.0K from disk, written 384.0K to disk.
Nov 25 23:45:15 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 25 23:45:15 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 25 23:45:15 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 23:45:15 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 23:45:15 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 23:45:15 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 25 23:45:15 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 25 23:45:15 compute-0 sshd[187979]: Server listening on 0.0.0.0 port 22.
Nov 25 23:45:15 compute-0 sshd[187979]: Server listening on :: port 22.
Nov 25 23:45:15 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 25 23:45:16 compute-0 sshd-session[187146]: Received disconnect from 45.78.216.132 port 40116:11: Bye Bye [preauth]
Nov 25 23:45:16 compute-0 sshd-session[187146]: Disconnected from invalid user apagar 45.78.216.132 port 40116 [preauth]
Nov 25 23:45:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]: {
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:     "0": [
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:         {
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "devices": [
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "/dev/loop3"
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             ],
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_name": "ceph_lv0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_size": "21470642176",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "name": "ceph_lv0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "tags": {
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.cluster_name": "ceph",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.crush_device_class": "",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.encrypted": "0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.osd_id": "0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.type": "block",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.vdo": "0"
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             },
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "type": "block",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "vg_name": "ceph_vg0"
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:         }
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:     ],
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:     "1": [
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:         {
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "devices": [
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "/dev/loop4"
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             ],
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_name": "ceph_lv1",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_size": "21470642176",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "name": "ceph_lv1",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "tags": {
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.cluster_name": "ceph",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.crush_device_class": "",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.encrypted": "0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.osd_id": "1",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.type": "block",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.vdo": "0"
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             },
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "type": "block",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "vg_name": "ceph_vg1"
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:         }
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:     ],
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:     "2": [
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:         {
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "devices": [
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "/dev/loop5"
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             ],
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_name": "ceph_lv2",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_size": "21470642176",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "name": "ceph_lv2",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "tags": {
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.cluster_name": "ceph",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.crush_device_class": "",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.encrypted": "0",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.osd_id": "2",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.type": "block",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:                 "ceph.vdo": "0"
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             },
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "type": "block",
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:             "vg_name": "ceph_vg2"
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:         }
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]:     ]
Nov 25 23:45:16 compute-0 vigilant_lamport[187970]: }
Nov 25 23:45:16 compute-0 systemd[1]: libpod-520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79.scope: Deactivated successfully.
Nov 25 23:45:16 compute-0 podman[187949]: 2025-11-25 23:45:16.633504826 +0000 UTC m=+0.975082041 container died 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c702683a2e6c046fd88b94804e6d933a5b9781cee3e3ce23e66afbb5a5abb885-merged.mount: Deactivated successfully.
Nov 25 23:45:16 compute-0 podman[187949]: 2025-11-25 23:45:16.717847134 +0000 UTC m=+1.059424379 container remove 520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lamport, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:45:16 compute-0 systemd[1]: libpod-conmon-520adc5fbc1067ed0a4f18e855e3f4dd2fde0fd416f846b154b1c4a6aa28ac79.scope: Deactivated successfully.
Nov 25 23:45:16 compute-0 sudo[187438]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:16 compute-0 sudo[188075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:16 compute-0 sudo[188075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:16 compute-0 sudo[188075]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:16 compute-0 sudo[188108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:45:16 compute-0 sudo[188108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:16 compute-0 sudo[188108]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:17 compute-0 sudo[188143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:17 compute-0 sudo[188143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:17 compute-0 sudo[188143]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:17 compute-0 sudo[188182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:45:17 compute-0 sudo[188182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:17 compute-0 ceph-mon[75654]: pgmap v454: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:17 compute-0 podman[188291]: 2025-11-25 23:45:17.497503577 +0000 UTC m=+0.049369711 container create f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:45:17 compute-0 systemd[1]: Started libpod-conmon-f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27.scope.
Nov 25 23:45:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:45:17 compute-0 podman[188291]: 2025-11-25 23:45:17.473092489 +0000 UTC m=+0.024958633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:45:17 compute-0 podman[188291]: 2025-11-25 23:45:17.575889967 +0000 UTC m=+0.127756111 container init f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 23:45:17 compute-0 podman[188291]: 2025-11-25 23:45:17.584236079 +0000 UTC m=+0.136102213 container start f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 23:45:17 compute-0 podman[188291]: 2025-11-25 23:45:17.587232578 +0000 UTC m=+0.139098732 container attach f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 23:45:17 compute-0 elated_nobel[188318]: 167 167
Nov 25 23:45:17 compute-0 systemd[1]: libpod-f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27.scope: Deactivated successfully.
Nov 25 23:45:17 compute-0 podman[188291]: 2025-11-25 23:45:17.591415409 +0000 UTC m=+0.143281543 container died f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c66f1133381c23656896e84de449188ffbe3383a5d683f01b49b30608aee7c15-merged.mount: Deactivated successfully.
Nov 25 23:45:17 compute-0 podman[188291]: 2025-11-25 23:45:17.633810024 +0000 UTC m=+0.185676158 container remove f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 23:45:17 compute-0 systemd[1]: libpod-conmon-f643cb7fb9fb429810d7349b1f3ad35b0d10cfc1d9fd1b562a5efc78786f4d27.scope: Deactivated successfully.
Nov 25 23:45:17 compute-0 podman[188368]: 2025-11-25 23:45:17.785043308 +0000 UTC m=+0.038372109 container create 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 25 23:45:17 compute-0 systemd[1]: Started libpod-conmon-574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41.scope.
Nov 25 23:45:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:45:17 compute-0 podman[188368]: 2025-11-25 23:45:17.765921601 +0000 UTC m=+0.019250442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709465d388b48862742d50032e958dff4347d35ce4805ad274a8df31b496b770/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709465d388b48862742d50032e958dff4347d35ce4805ad274a8df31b496b770/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709465d388b48862742d50032e958dff4347d35ce4805ad274a8df31b496b770/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709465d388b48862742d50032e958dff4347d35ce4805ad274a8df31b496b770/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:45:17 compute-0 podman[188368]: 2025-11-25 23:45:17.87627736 +0000 UTC m=+0.129606161 container init 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:45:17 compute-0 podman[188368]: 2025-11-25 23:45:17.882774672 +0000 UTC m=+0.136103473 container start 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 23:45:17 compute-0 podman[188368]: 2025-11-25 23:45:17.885581836 +0000 UTC m=+0.138910627 container attach 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:45:18 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 23:45:18 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 23:45:18 compute-0 systemd[1]: Reloading.
Nov 25 23:45:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:18 compute-0 systemd-sysv-generator[188463]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:45:18 compute-0 systemd-rc-local-generator[188459]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:45:18 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 23:45:18 compute-0 loving_panini[188386]: {
Nov 25 23:45:18 compute-0 loving_panini[188386]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "osd_id": 2,
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "type": "bluestore"
Nov 25 23:45:18 compute-0 loving_panini[188386]:     },
Nov 25 23:45:18 compute-0 loving_panini[188386]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "osd_id": 1,
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "type": "bluestore"
Nov 25 23:45:18 compute-0 loving_panini[188386]:     },
Nov 25 23:45:18 compute-0 loving_panini[188386]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "osd_id": 0,
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:45:18 compute-0 loving_panini[188386]:         "type": "bluestore"
Nov 25 23:45:18 compute-0 loving_panini[188386]:     }
Nov 25 23:45:18 compute-0 loving_panini[188386]: }
Nov 25 23:45:18 compute-0 systemd[1]: libpod-574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41.scope: Deactivated successfully.
Nov 25 23:45:18 compute-0 podman[188368]: 2025-11-25 23:45:18.874708239 +0000 UTC m=+1.128037080 container died 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:45:19 compute-0 ceph-mon[75654]: pgmap v455: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-709465d388b48862742d50032e958dff4347d35ce4805ad274a8df31b496b770-merged.mount: Deactivated successfully.
Nov 25 23:45:19 compute-0 podman[188368]: 2025-11-25 23:45:19.902379574 +0000 UTC m=+2.155708405 container remove 574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 25 23:45:19 compute-0 systemd[1]: libpod-conmon-574c0cf00ff199bcdb606ad27938e58b22539a665f84b1aea96265e5afb67d41.scope: Deactivated successfully.
Nov 25 23:45:19 compute-0 sudo[188182]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:45:19 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:45:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:45:19 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:45:19 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 0b2489e4-b52a-4e47-bc0b-c9f6818c826d does not exist
Nov 25 23:45:20 compute-0 sudo[189964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:45:20 compute-0 sudo[189964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:20 compute-0 sudo[189964]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:20 compute-0 sudo[190046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:45:20 compute-0 sudo[190046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:45:20 compute-0 sudo[190046]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:20 compute-0 sudo[168156]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:45:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:45:21 compute-0 ceph-mon[75654]: pgmap v456: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:21 compute-0 sudo[191334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvezvtlaopkfjmmcktiokfuncpagvrxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114320.8407874-336-166879083631301/AnsiballZ_systemd.py'
Nov 25 23:45:21 compute-0 sudo[191334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:21 compute-0 python3.9[191365]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 23:45:21 compute-0 systemd[1]: Reloading.
Nov 25 23:45:21 compute-0 systemd-rc-local-generator[191815]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:45:22 compute-0 systemd-sysv-generator[191821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:45:22 compute-0 sudo[191334]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:22 compute-0 sudo[192665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shxnhvqvexnqdfsnpbxbhfhoeqhpjzyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114322.427607-336-36452425313828/AnsiballZ_systemd.py'
Nov 25 23:45:22 compute-0 sudo[192665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:23 compute-0 python3.9[192692]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 23:45:23 compute-0 systemd[1]: Reloading.
Nov 25 23:45:23 compute-0 systemd-rc-local-generator[193167]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:45:23 compute-0 systemd-sysv-generator[193174]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:45:23 compute-0 ceph-mon[75654]: pgmap v457: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:23 compute-0 sudo[192665]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:23 compute-0 sshd-session[192214]: Received disconnect from 124.18.141.70 port 41660:11: Bye Bye [preauth]
Nov 25 23:45:23 compute-0 sshd-session[192214]: Disconnected from authenticating user root 124.18.141.70 port 41660 [preauth]
Nov 25 23:45:23 compute-0 sudo[193902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkjbtebanrgekkbxaufeoibwariiggmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114323.6251829-336-1523146229405/AnsiballZ_systemd.py'
Nov 25 23:45:23 compute-0 sudo[193902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:24 compute-0 python3.9[193915]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 23:45:24 compute-0 systemd[1]: Reloading.
Nov 25 23:45:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:24 compute-0 systemd-rc-local-generator[194342]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:45:24 compute-0 systemd-sysv-generator[194346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:45:24 compute-0 sudo[193902]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:25 compute-0 sudo[195094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcqrnunfozsvidbywdhccgqqbgbzdvix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114324.7729974-336-42121587660093/AnsiballZ_systemd.py'
Nov 25 23:45:25 compute-0 sudo[195094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:25 compute-0 ceph-mon[75654]: pgmap v458: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:25 compute-0 python3.9[195113]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 23:45:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:25 compute-0 systemd[1]: Reloading.
Nov 25 23:45:25 compute-0 systemd-rc-local-generator[195543]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:45:25 compute-0 systemd-sysv-generator[195547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:45:25 compute-0 sudo[195094]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:45:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:45:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:45:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:45:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:45:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:45:26 compute-0 sudo[196438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zekmcjzcftcuuivdvdusfwjwycxbmkro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114325.9752803-365-64080043295113/AnsiballZ_systemd.py'
Nov 25 23:45:26 compute-0 sudo[196438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:26 compute-0 python3.9[196462]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:26 compute-0 systemd[1]: Reloading.
Nov 25 23:45:26 compute-0 systemd-rc-local-generator[196822]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:45:26 compute-0 systemd-sysv-generator[196828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:45:26 compute-0 sudo[196438]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:27 compute-0 ceph-mon[75654]: pgmap v459: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:27 compute-0 sudo[197602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bysxspdwcceafrajbgplyyeywlkkxmay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114327.1406052-365-26139765976325/AnsiballZ_systemd.py'
Nov 25 23:45:27 compute-0 sudo[197602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:27 compute-0 python3.9[197625]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:27 compute-0 systemd[1]: Reloading.
Nov 25 23:45:28 compute-0 systemd-rc-local-generator[198021]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:45:28 compute-0 systemd-sysv-generator[198031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:45:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 23:45:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 23:45:28 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.706s CPU time.
Nov 25 23:45:28 compute-0 sudo[197602]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:28 compute-0 systemd[1]: run-r09d2ef183fdd4d688ef929122973b249.service: Deactivated successfully.
Nov 25 23:45:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:28 compute-0 sudo[198214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diohtfpqdjrwdbgfejtaqiyelepkpecg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114328.4235415-365-21365878607857/AnsiballZ_systemd.py'
Nov 25 23:45:28 compute-0 sudo[198214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:29 compute-0 python3.9[198216]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:29 compute-0 systemd[1]: Reloading.
Nov 25 23:45:29 compute-0 systemd-rc-local-generator[198243]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:45:29 compute-0 systemd-sysv-generator[198250]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:45:29 compute-0 ceph-mon[75654]: pgmap v460: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:29 compute-0 sudo[198214]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:30 compute-0 sudo[198404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soholjpqbvxlvdjwbbujxcishdzwmctf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114329.7669177-365-128562429937607/AnsiballZ_systemd.py'
Nov 25 23:45:30 compute-0 sudo[198404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:30 compute-0 python3.9[198406]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:30 compute-0 sudo[198404]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:30 compute-0 sudo[198559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulkszhlskjgeomeqriklkdcxdajqpldn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114330.6095226-365-122449297439269/AnsiballZ_systemd.py'
Nov 25 23:45:30 compute-0 sudo[198559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:31 compute-0 python3.9[198561]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:31 compute-0 systemd[1]: Reloading.
Nov 25 23:45:31 compute-0 ceph-mon[75654]: pgmap v461: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:31 compute-0 systemd-rc-local-generator[198593]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:45:31 compute-0 systemd-sysv-generator[198597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:45:31 compute-0 sudo[198559]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:32 compute-0 sudo[198750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veaamvpaenmdhzlzcbmrmzfqzramehsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114331.8790376-401-3090399327358/AnsiballZ_systemd.py'
Nov 25 23:45:32 compute-0 sudo[198750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:32 compute-0 python3.9[198752]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 23:45:32 compute-0 systemd[1]: Reloading.
Nov 25 23:45:32 compute-0 systemd-rc-local-generator[198783]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:45:32 compute-0 systemd-sysv-generator[198787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:45:32 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 25 23:45:32 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 25 23:45:32 compute-0 sudo[198750]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:33 compute-0 ceph-mon[75654]: pgmap v462: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:33 compute-0 sudo[198944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smfuqpopiqksaxjzfnrwvougseajqfrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114333.1841116-409-254329914589822/AnsiballZ_systemd.py'
Nov 25 23:45:33 compute-0 sudo[198944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:34 compute-0 python3.9[198946]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:34 compute-0 sudo[198944]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v463: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:34 compute-0 sudo[199116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofjpfvhqvvihnwyvfcdergmhcchzrzzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114334.4085653-409-187831184339329/AnsiballZ_systemd.py'
Nov 25 23:45:34 compute-0 sudo[199116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:34 compute-0 podman[199073]: 2025-11-25 23:45:34.884496085 +0000 UTC m=+0.112741437 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 23:45:35 compute-0 python3.9[199120]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:35 compute-0 ceph-mon[75654]: pgmap v463: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:36 compute-0 sudo[199116]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:36 compute-0 sudo[199280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksevizvfqfejhohbmxlnsvkwhslrfxhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114336.4322236-409-80141885312713/AnsiballZ_systemd.py'
Nov 25 23:45:36 compute-0 sudo[199280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:37 compute-0 python3.9[199282]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:37 compute-0 sudo[199280]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:37 compute-0 ceph-mon[75654]: pgmap v464: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:37 compute-0 sudo[199435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciekmwvynsalmvlgvexufquivfovqghl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114337.433-409-58459895733205/AnsiballZ_systemd.py'
Nov 25 23:45:37 compute-0 sudo[199435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:38 compute-0 python3.9[199437]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:38 compute-0 sudo[199435]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v465: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:38 compute-0 sudo[199590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvrlruavsfjgfjzvjrirulzxyyczpeds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114338.4093196-409-181351038056460/AnsiballZ_systemd.py'
Nov 25 23:45:38 compute-0 sudo[199590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:39 compute-0 python3.9[199592]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:39 compute-0 sudo[199590]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:39 compute-0 ceph-mon[75654]: pgmap v465: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:39 compute-0 sudo[199745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jevcyqowqqnuanjbqjkozurwxsabdrkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114339.3575728-409-9483958600840/AnsiballZ_systemd.py'
Nov 25 23:45:39 compute-0 sudo[199745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:40 compute-0 python3.9[199747]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:40 compute-0 sudo[199745]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:40 compute-0 sudo[199900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avqvpzslmagxjpdeftfidkarmxiylwap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114340.3321283-409-59674044448990/AnsiballZ_systemd.py'
Nov 25 23:45:40 compute-0 sudo[199900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:45:40.753 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:45:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:45:40.753 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:45:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:45:40.753 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:45:41 compute-0 python3.9[199902]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:41 compute-0 sudo[199900]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:41 compute-0 ceph-mon[75654]: pgmap v466: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:41 compute-0 sudo[200055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eojmnszfzwrearkwuifrlahzxoxzqtwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114341.3086658-409-25814128111103/AnsiballZ_systemd.py'
Nov 25 23:45:41 compute-0 sudo[200055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:42 compute-0 python3.9[200057]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:42 compute-0 sudo[200055]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v467: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:42 compute-0 sudo[200220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llvmkkdmvcvbzbtjstdgapkxweuzgruf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114342.3423152-409-161762118476218/AnsiballZ_systemd.py'
Nov 25 23:45:42 compute-0 sudo[200220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:42 compute-0 podman[200184]: 2025-11-25 23:45:42.790932444 +0000 UTC m=+0.112680576 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent)
Nov 25 23:45:43 compute-0 python3.9[200232]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:43 compute-0 ceph-mon[75654]: pgmap v467: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:44 compute-0 sudo[200220]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:44 compute-0 sudo[200386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwqhfklbumerukbiethgmsbikxwlcskv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114344.3647072-409-153200862216848/AnsiballZ_systemd.py'
Nov 25 23:45:44 compute-0 sudo[200386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:45 compute-0 python3.9[200388]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:45 compute-0 sudo[200386]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:45 compute-0 ceph-mon[75654]: pgmap v468: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:45 compute-0 sudo[200541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfhtrohexatnaezbbanfjvoogduyxoin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114345.2266765-409-1707066044383/AnsiballZ_systemd.py'
Nov 25 23:45:45 compute-0 sudo[200541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:45 compute-0 python3.9[200543]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:45 compute-0 sudo[200541]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:46 compute-0 sudo[200696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swntczrlrmgtstdhjqysyybgvqmvjfci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114346.0763547-409-192985422079237/AnsiballZ_systemd.py'
Nov 25 23:45:46 compute-0 sudo[200696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:46 compute-0 python3.9[200698]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:46 compute-0 sudo[200696]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:47 compute-0 sudo[200851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imblfcwzfzgwpevgrlymvpopaylcfkfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114346.8780715-409-27316810541792/AnsiballZ_systemd.py'
Nov 25 23:45:47 compute-0 sudo[200851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:47 compute-0 ceph-mon[75654]: pgmap v469: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:47 compute-0 python3.9[200853]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:47 compute-0 sudo[200851]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:48 compute-0 sudo[201006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvunyjrrhmaaegyvhwfmjlsohwifrtdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114347.8119233-409-239308472918120/AnsiballZ_systemd.py'
Nov 25 23:45:48 compute-0 sudo[201006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:48 compute-0 python3.9[201008]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 23:45:48 compute-0 sudo[201006]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:49 compute-0 sudo[201161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftcrgwqsifwyvlavdshuxxgejoulqybi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114349.0305297-511-206248020213849/AnsiballZ_file.py'
Nov 25 23:45:49 compute-0 sudo[201161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:49 compute-0 ceph-mon[75654]: pgmap v470: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:49 compute-0 python3.9[201163]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:45:49 compute-0 sudo[201161]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:50 compute-0 sudo[201313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgdhaafvqkirjvutqixzdmtfqoujwujp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114349.7878733-511-172741850972506/AnsiballZ_file.py'
Nov 25 23:45:50 compute-0 sudo[201313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v471: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:50 compute-0 python3.9[201315]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:45:50 compute-0 sudo[201313]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:50 compute-0 sudo[201465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efcpxfsbuhweuemswibrstglkjjkmjps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114350.5186625-511-243135886593238/AnsiballZ_file.py'
Nov 25 23:45:50 compute-0 sudo[201465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:51 compute-0 python3.9[201467]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:45:51 compute-0 sudo[201465]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:51 compute-0 ceph-mon[75654]: pgmap v471: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:51 compute-0 sudo[201617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipmbusqychiwceikpficpjjiabhgvkdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114351.2380033-511-244588964483558/AnsiballZ_file.py'
Nov 25 23:45:51 compute-0 sudo[201617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:51 compute-0 python3.9[201619]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:45:51 compute-0 sudo[201617]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:52 compute-0 sudo[201769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dinyvyzqclcjollxwgjjmhscgrjphije ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114352.0170522-511-224562687512140/AnsiballZ_file.py'
Nov 25 23:45:52 compute-0 sudo[201769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:52 compute-0 python3.9[201771]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:45:52 compute-0 sudo[201769]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:53 compute-0 sudo[201921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hetwrgunzntqtkzkqhozludcfixhnfky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114352.8002975-511-119801966124627/AnsiballZ_file.py'
Nov 25 23:45:53 compute-0 sudo[201921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:53 compute-0 python3.9[201923]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:45:53 compute-0 sudo[201921]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:53 compute-0 ceph-mon[75654]: pgmap v472: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:54 compute-0 sudo[202073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joptypnzwpuylaxnzpablgltcnasabye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114353.6066053-554-34671781885842/AnsiballZ_stat.py'
Nov 25 23:45:54 compute-0 sudo[202073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:54 compute-0 python3.9[202075]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:45:54 compute-0 sudo[202073]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:55 compute-0 sudo[202198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sebdamihrbjmgneonbijgxgdbhwddwny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114353.6066053-554-34671781885842/AnsiballZ_copy.py'
Nov 25 23:45:55 compute-0 sudo[202198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:55 compute-0 python3.9[202200]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114353.6066053-554-34671781885842/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:45:55 compute-0 sudo[202198]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:45:55 compute-0 ceph-mon[75654]: pgmap v473: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:55 compute-0 sudo[202350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdekhazzccnhxxpkrhjzyikqnoytqpwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114355.462142-554-122363963963017/AnsiballZ_stat.py'
Nov 25 23:45:55 compute-0 sudo[202350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:45:56
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'volumes', 'images', 'vms', '.mgr', 'cephfs.cephfs.data']
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:45:56 compute-0 python3.9[202352]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:45:56 compute-0 sudo[202350]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:45:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:56 compute-0 sudo[202477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skdddswhywresjrgybjrfnynrcezbshz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114355.462142-554-122363963963017/AnsiballZ_copy.py'
Nov 25 23:45:56 compute-0 sudo[202477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:56 compute-0 sshd-session[202379]: Connection closed by 171.244.140.22 port 55374 [preauth]
Nov 25 23:45:56 compute-0 python3.9[202479]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114355.462142-554-122363963963017/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:45:56 compute-0 sudo[202477]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:57 compute-0 sudo[202629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omiytkwirungkphkvkruhugqdpbrmfeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114357.0792313-554-227852866324465/AnsiballZ_stat.py'
Nov 25 23:45:57 compute-0 sudo[202629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:57 compute-0 ceph-mon[75654]: pgmap v474: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:57 compute-0 python3.9[202631]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:45:57 compute-0 sudo[202629]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:58 compute-0 sudo[202754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuxcbviaplkieyvvqqcgcgxdykbonqjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114357.0792313-554-227852866324465/AnsiballZ_copy.py'
Nov 25 23:45:58 compute-0 sudo[202754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v475: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:58 compute-0 python3.9[202756]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114357.0792313-554-227852866324465/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:45:58 compute-0 sudo[202754]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:58 compute-0 sudo[202906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rojqgynkganyuufnclzdofplxsyrinwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114358.5717072-554-111722182401673/AnsiballZ_stat.py'
Nov 25 23:45:58 compute-0 sudo[202906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:59 compute-0 python3.9[202908]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:45:59 compute-0 sudo[202906]: pam_unix(sudo:session): session closed for user root
Nov 25 23:45:59 compute-0 ceph-mon[75654]: pgmap v475: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:45:59 compute-0 sudo[203031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgyexathoxgtlawjnogrvrfdwegfsgpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114358.5717072-554-111722182401673/AnsiballZ_copy.py'
Nov 25 23:45:59 compute-0 sudo[203031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:45:59 compute-0 python3.9[203033]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114358.5717072-554-111722182401673/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:45:59 compute-0 sudo[203031]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:00 compute-0 sudo[203183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdyyhrownynelxrgyykbdcemllqpitbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114360.2035878-554-207829125034064/AnsiballZ_stat.py'
Nov 25 23:46:00 compute-0 sudo[203183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:00 compute-0 python3.9[203185]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:00 compute-0 sudo[203183]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:01 compute-0 sudo[203308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbjhttbegrvuxjxbvfwfcvfzlaztnjus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114360.2035878-554-207829125034064/AnsiballZ_copy.py'
Nov 25 23:46:01 compute-0 sudo[203308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:01 compute-0 python3.9[203310]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114360.2035878-554-207829125034064/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:01 compute-0 sudo[203308]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:01 compute-0 ceph-mon[75654]: pgmap v476: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:46:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:46:02 compute-0 sudo[203460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxcevylnnniycbjnexehswbrqypjqmpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114361.656885-554-59003449351394/AnsiballZ_stat.py'
Nov 25 23:46:02 compute-0 sudo[203460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:02 compute-0 python3.9[203462]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:02 compute-0 sudo[203460]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:02 compute-0 sudo[203585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vajqdmiapskgyfxhboptscwzpwyinxli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114361.656885-554-59003449351394/AnsiballZ_copy.py'
Nov 25 23:46:02 compute-0 sudo[203585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:02 compute-0 python3.9[203587]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114361.656885-554-59003449351394/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:02 compute-0 sudo[203585]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:03 compute-0 sudo[203737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwldjnexkxwrogzgvwcophmjnpcsmbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114363.1260843-554-31945535958151/AnsiballZ_stat.py'
Nov 25 23:46:03 compute-0 sudo[203737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:03 compute-0 ceph-mon[75654]: pgmap v477: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:03 compute-0 python3.9[203739]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:03 compute-0 sudo[203737]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:04 compute-0 sudo[203860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyqqsynqhqwmetkmgtxslljzcswknvpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114363.1260843-554-31945535958151/AnsiballZ_copy.py'
Nov 25 23:46:04 compute-0 sudo[203860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:04 compute-0 python3.9[203862]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114363.1260843-554-31945535958151/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:04 compute-0 sudo[203860]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:04 compute-0 sudo[204012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djsstoyemfzrdzsashbrakjpztvxduje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114364.490349-554-103738279089853/AnsiballZ_stat.py'
Nov 25 23:46:04 compute-0 sudo[204012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:05 compute-0 python3.9[204014]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:05 compute-0 sudo[204012]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:05 compute-0 podman[204039]: 2025-11-25 23:46:05.260557732 +0000 UTC m=+0.091330227 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 23:46:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:05 compute-0 sudo[204164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ancazjptfxrspueiyzsjouvbadqagbbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114364.490349-554-103738279089853/AnsiballZ_copy.py'
Nov 25 23:46:05 compute-0 sudo[204164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:05 compute-0 ceph-mon[75654]: pgmap v478: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:05 compute-0 python3.9[204166]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764114364.490349-554-103738279089853/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:05 compute-0 sudo[204164]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:06 compute-0 sudo[204316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vystsesctydkaokdtilnsciaudnxqqbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114365.8788948-667-45597709325104/AnsiballZ_command.py'
Nov 25 23:46:06 compute-0 sudo[204316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:06 compute-0 python3.9[204318]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 25 23:46:06 compute-0 sudo[204316]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:06 compute-0 sudo[204469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sahhnpwjraynbowsvftkufgdtawefvxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114366.6902921-676-29698762239852/AnsiballZ_file.py'
Nov 25 23:46:06 compute-0 sudo[204469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:07 compute-0 python3.9[204471]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:07 compute-0 sudo[204469]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:07 compute-0 ceph-mon[75654]: pgmap v479: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:07 compute-0 sudo[204623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-selfixycgetichckccunacqplzfyszyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114367.3388073-676-29929453891112/AnsiballZ_file.py'
Nov 25 23:46:07 compute-0 sudo[204623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:07 compute-0 python3.9[204625]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:07 compute-0 sudo[204623]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:08 compute-0 sudo[204775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmfgwvhjthfpppfrafugbdgehvexieql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114367.9428217-676-1728147470409/AnsiballZ_file.py'
Nov 25 23:46:08 compute-0 sudo[204775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:08 compute-0 sshd-session[204490]: Invalid user test from 202.83.162.167 port 45288
Nov 25 23:46:08 compute-0 python3.9[204777]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:08 compute-0 sudo[204775]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:08 compute-0 sshd-session[204490]: Received disconnect from 202.83.162.167 port 45288:11: Bye Bye [preauth]
Nov 25 23:46:08 compute-0 sshd-session[204490]: Disconnected from invalid user test 202.83.162.167 port 45288 [preauth]
Nov 25 23:46:08 compute-0 sudo[204927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxkmkwnqdhvirlvtbnkvancvvlmnpekt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114368.6485825-676-23806535594739/AnsiballZ_file.py'
Nov 25 23:46:08 compute-0 sudo[204927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:09 compute-0 python3.9[204929]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:09 compute-0 sudo[204927]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:09 compute-0 ceph-mon[75654]: pgmap v480: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:09 compute-0 sudo[205079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzvvukpatmxxpktqhxggoguckgbybjdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114369.3679707-676-168414039627382/AnsiballZ_file.py'
Nov 25 23:46:09 compute-0 sudo[205079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:09 compute-0 python3.9[205081]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:09 compute-0 sudo[205079]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v481: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:10 compute-0 sudo[205231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecrxuqzfjylpkdnvudrdyyizomrimmzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114370.0376353-676-20948651443583/AnsiballZ_file.py'
Nov 25 23:46:10 compute-0 sudo[205231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:10 compute-0 python3.9[205233]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:10 compute-0 sudo[205231]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:11 compute-0 sudo[205383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qebwirmlhdgjeofowewlipmhowjogmpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114370.6979418-676-192574460013028/AnsiballZ_file.py'
Nov 25 23:46:11 compute-0 sudo[205383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:11 compute-0 python3.9[205385]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:11 compute-0 sudo[205383]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:11 compute-0 ceph-mon[75654]: pgmap v481: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:11 compute-0 sudo[205535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyykynlpcexfagduugaaqatsbnizveyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114371.3666978-676-2402427732923/AnsiballZ_file.py'
Nov 25 23:46:11 compute-0 sudo[205535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:11 compute-0 python3.9[205537]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:11 compute-0 sudo[205535]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:12 compute-0 sudo[205687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsshlgmmcxckfivtdmaogdawqrbdmxcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114372.1745977-676-120883860933590/AnsiballZ_file.py'
Nov 25 23:46:12 compute-0 sudo[205687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:12 compute-0 python3.9[205689]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:12 compute-0 sudo[205687]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:13 compute-0 podman[205785]: 2025-11-25 23:46:13.28578945 +0000 UTC m=+0.092948861 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 23:46:13 compute-0 sudo[205858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-putegzmwkushdnylpykvjvtdbgpnfavk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114372.987411-676-54414127549702/AnsiballZ_file.py'
Nov 25 23:46:13 compute-0 sudo[205858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:13 compute-0 ceph-mon[75654]: pgmap v482: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:13 compute-0 python3.9[205860]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:13 compute-0 sudo[205858]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:14 compute-0 sudo[206010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgguygjhabyvttvqygblrrksojyiidqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114373.8486714-676-80451460583142/AnsiballZ_file.py'
Nov 25 23:46:14 compute-0 sudo[206010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:14 compute-0 python3.9[206012]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:14 compute-0 sudo[206010]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:15 compute-0 sudo[206162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zswjrnpeyvnkafioxxlljfbivduopvtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114374.7068615-676-60821448509454/AnsiballZ_file.py'
Nov 25 23:46:15 compute-0 sudo[206162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:15 compute-0 python3.9[206164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:15 compute-0 sudo[206162]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:15 compute-0 ceph-mon[75654]: pgmap v483: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:15 compute-0 sudo[206314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yimbtznnktpmsfcklqbbketgdjtozdmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114375.3593166-676-153478876499930/AnsiballZ_file.py'
Nov 25 23:46:15 compute-0 sudo[206314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:15 compute-0 python3.9[206316]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:15 compute-0 sudo[206314]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:16 compute-0 sudo[206466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flpzuivmxdjeawmszadzpacsmtzzctvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114376.1648507-676-117092060660815/AnsiballZ_file.py'
Nov 25 23:46:16 compute-0 sudo[206466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:16 compute-0 python3.9[206468]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:16 compute-0 sudo[206466]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:17 compute-0 sudo[206618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwkdepgpibfhsyfmiycnjpbmjpetfhkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114377.0625043-775-197558175278093/AnsiballZ_stat.py'
Nov 25 23:46:17 compute-0 sudo[206618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:17 compute-0 ceph-mon[75654]: pgmap v484: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:17 compute-0 python3.9[206620]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:17 compute-0 sudo[206618]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:18 compute-0 sudo[206741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iytckcetsqjnebxogkvksmqmcytvacdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114377.0625043-775-197558175278093/AnsiballZ_copy.py'
Nov 25 23:46:18 compute-0 sudo[206741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:18 compute-0 python3.9[206743]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114377.0625043-775-197558175278093/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:18 compute-0 sudo[206741]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:18 compute-0 sudo[206893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzbmughndfopebvumnndzlwpvmrzphuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114378.5051255-775-111059718653022/AnsiballZ_stat.py'
Nov 25 23:46:18 compute-0 sudo[206893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:19 compute-0 python3.9[206895]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:19 compute-0 sudo[206893]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:19 compute-0 sudo[207016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hicpjhohlfezxfpencjryvdnrqaioygq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114378.5051255-775-111059718653022/AnsiballZ_copy.py'
Nov 25 23:46:19 compute-0 sudo[207016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:19 compute-0 ceph-mon[75654]: pgmap v485: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:19 compute-0 python3.9[207018]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114378.5051255-775-111059718653022/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:19 compute-0 sudo[207016]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:20 compute-0 sudo[207142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:46:20 compute-0 sudo[207142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:20 compute-0 sudo[207142]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:20 compute-0 sudo[207194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkehllpefeiappuyiioeovyojtmteihi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114379.8471944-775-12685982182814/AnsiballZ_stat.py'
Nov 25 23:46:20 compute-0 sudo[207194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:20 compute-0 sudo[207193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:46:20 compute-0 sudo[207193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:20 compute-0 sudo[207193]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:20 compute-0 sudo[207221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:46:20 compute-0 sudo[207221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:20 compute-0 sudo[207221]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:20 compute-0 python3.9[207201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:20 compute-0 sudo[207246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:46:20 compute-0 sudo[207246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:20 compute-0 sudo[207194]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:20 compute-0 sudo[207410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcftsvimcogjgpuhfydrvxrvnsjyogbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114379.8471944-775-12685982182814/AnsiballZ_copy.py'
Nov 25 23:46:20 compute-0 sudo[207410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:20 compute-0 sudo[207246]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:46:20 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:46:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:46:20 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:46:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:46:21 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:46:21 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 199a0de9-ebb0-4283-b2aa-2422972e28c5 does not exist
Nov 25 23:46:21 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 44134351-52ec-428f-8d81-ad92a8913d2c does not exist
Nov 25 23:46:21 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 89d283b5-f379-455d-972a-9ecc796fbfcb does not exist
Nov 25 23:46:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:46:21 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:46:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:46:21 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:46:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:46:21 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:46:21 compute-0 sudo[207425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:46:21 compute-0 sudo[207425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:21 compute-0 sudo[207425]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:21 compute-0 python3.9[207412]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114379.8471944-775-12685982182814/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:21 compute-0 sudo[207450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:46:21 compute-0 sudo[207450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:21 compute-0 sudo[207450]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:21 compute-0 sudo[207410]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:21 compute-0 sudo[207475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:46:21 compute-0 sudo[207475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:21 compute-0 sudo[207475]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:21 compute-0 sudo[207524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:46:21 compute-0 sudo[207524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:21 compute-0 ceph-mon[75654]: pgmap v486: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:46:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:46:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:46:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:46:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:46:21 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:46:21 compute-0 podman[207689]: 2025-11-25 23:46:21.930368547 +0000 UTC m=+0.038550106 container create a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:46:21 compute-0 sudo[207722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikrhdmhlhnpazkxgkhdwxqogsuhwjkkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114381.5191815-775-111157250370951/AnsiballZ_stat.py'
Nov 25 23:46:21 compute-0 sudo[207722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:21 compute-0 systemd[1]: Started libpod-conmon-a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71.scope.
Nov 25 23:46:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:46:21 compute-0 podman[207689]: 2025-11-25 23:46:21.99615925 +0000 UTC m=+0.104340799 container init a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:46:22 compute-0 podman[207689]: 2025-11-25 23:46:22.001922247 +0000 UTC m=+0.110103806 container start a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:46:22 compute-0 podman[207689]: 2025-11-25 23:46:22.00539899 +0000 UTC m=+0.113580529 container attach a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:46:22 compute-0 naughty_herschel[207728]: 167 167
Nov 25 23:46:22 compute-0 podman[207689]: 2025-11-25 23:46:21.912557944 +0000 UTC m=+0.020739483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:46:22 compute-0 systemd[1]: libpod-a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71.scope: Deactivated successfully.
Nov 25 23:46:22 compute-0 podman[207689]: 2025-11-25 23:46:22.008660829 +0000 UTC m=+0.116842358 container died a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 23:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8db5ab3e7314fff301d30dcd83f66c5cebf54fbaebc6eba8c7df82b5613b7ea4-merged.mount: Deactivated successfully.
Nov 25 23:46:22 compute-0 podman[207689]: 2025-11-25 23:46:22.041467308 +0000 UTC m=+0.149648847 container remove a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:46:22 compute-0 systemd[1]: libpod-conmon-a3b0c309173414eeff656573be7392041c2f765d8e7c95f5f2949821453f1f71.scope: Deactivated successfully.
Nov 25 23:46:22 compute-0 python3.9[207724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:22 compute-0 sudo[207722]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:22 compute-0 podman[207749]: 2025-11-25 23:46:22.263653662 +0000 UTC m=+0.064017027 container create 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 23:46:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:22 compute-0 systemd[1]: Started libpod-conmon-8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c.scope.
Nov 25 23:46:22 compute-0 podman[207749]: 2025-11-25 23:46:22.239412254 +0000 UTC m=+0.039775699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:46:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:22 compute-0 podman[207749]: 2025-11-25 23:46:22.377513438 +0000 UTC m=+0.177876813 container init 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 23:46:22 compute-0 podman[207749]: 2025-11-25 23:46:22.387878418 +0000 UTC m=+0.188241813 container start 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:46:22 compute-0 podman[207749]: 2025-11-25 23:46:22.392290769 +0000 UTC m=+0.192654144 container attach 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:46:22 compute-0 sudo[207891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggdrnhcmqesdepkiycnqtgtngrtozmfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114381.5191815-775-111157250370951/AnsiballZ_copy.py'
Nov 25 23:46:22 compute-0 sudo[207891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:22 compute-0 python3.9[207893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114381.5191815-775-111157250370951/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:22 compute-0 sudo[207891]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:23 compute-0 sudo[208059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afmritmjvczeclfhwmzuesccvwpvewrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114382.9596784-775-272894014053985/AnsiballZ_stat.py'
Nov 25 23:46:23 compute-0 sudo[208059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:23 compute-0 fervent_sutherland[207813]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:46:23 compute-0 fervent_sutherland[207813]: --> relative data size: 1.0
Nov 25 23:46:23 compute-0 fervent_sutherland[207813]: --> All data devices are unavailable
Nov 25 23:46:23 compute-0 systemd[1]: libpod-8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c.scope: Deactivated successfully.
Nov 25 23:46:23 compute-0 systemd[1]: libpod-8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c.scope: Consumed 1.030s CPU time.
Nov 25 23:46:23 compute-0 podman[207749]: 2025-11-25 23:46:23.483375686 +0000 UTC m=+1.283739071 container died 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 25 23:46:23 compute-0 python3.9[208061]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:23 compute-0 sudo[208059]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-522923b8e4516788be07bf2ed875153ead0d80c9951b98232c1e29e0b479f92e-merged.mount: Deactivated successfully.
Nov 25 23:46:23 compute-0 podman[207749]: 2025-11-25 23:46:23.554705449 +0000 UTC m=+1.355068824 container remove 8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:46:23 compute-0 systemd[1]: libpod-conmon-8aa8dd8ceeaa08c6671c857366a5a2dfa23f3fc17d0f464e28cc0c1ffbc7780c.scope: Deactivated successfully.
Nov 25 23:46:23 compute-0 sudo[207524]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:23 compute-0 ceph-mon[75654]: pgmap v487: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:23 compute-0 sudo[208108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:46:23 compute-0 sudo[208108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:23 compute-0 sudo[208108]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:23 compute-0 sudo[208157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:46:23 compute-0 sudo[208157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:23 compute-0 sudo[208157]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:23 compute-0 sudo[208205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:46:23 compute-0 sudo[208205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:23 compute-0 sudo[208205]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:23 compute-0 sudo[208254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:46:23 compute-0 sudo[208254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:23 compute-0 sudo[208305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvfjtaywobwyylufvxfzxwkxwrdmhrgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114382.9596784-775-272894014053985/AnsiballZ_copy.py'
Nov 25 23:46:23 compute-0 sudo[208305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:24 compute-0 python3.9[208307]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114382.9596784-775-272894014053985/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:24 compute-0 sudo[208305]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:24 compute-0 podman[208366]: 2025-11-25 23:46:24.276079914 +0000 UTC m=+0.041177647 container create bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:46:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:24 compute-0 systemd[1]: Started libpod-conmon-bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21.scope.
Nov 25 23:46:24 compute-0 podman[208366]: 2025-11-25 23:46:24.257287515 +0000 UTC m=+0.022385258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:46:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:46:24 compute-0 podman[208366]: 2025-11-25 23:46:24.379294732 +0000 UTC m=+0.144392505 container init bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 23:46:24 compute-0 podman[208366]: 2025-11-25 23:46:24.391427001 +0000 UTC m=+0.156524764 container start bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:46:24 compute-0 podman[208366]: 2025-11-25 23:46:24.3958027 +0000 UTC m=+0.160900433 container attach bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 23:46:24 compute-0 systemd[1]: libpod-bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21.scope: Deactivated successfully.
Nov 25 23:46:24 compute-0 upbeat_northcutt[208398]: 167 167
Nov 25 23:46:24 compute-0 conmon[208398]: conmon bc7aac43cef3167398e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21.scope/container/memory.events
Nov 25 23:46:24 compute-0 podman[208366]: 2025-11-25 23:46:24.40062228 +0000 UTC m=+0.165720043 container died bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 25 23:46:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-197be54d3d8e1704b220af00fa72515c7982c00982c6f3e19069d018d2a00284-merged.mount: Deactivated successfully.
Nov 25 23:46:24 compute-0 podman[208366]: 2025-11-25 23:46:24.45038697 +0000 UTC m=+0.215484723 container remove bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 23:46:24 compute-0 systemd[1]: libpod-conmon-bc7aac43cef3167398e4a62da7916a11eebd2cd942038d8ceb30ce29c7b3ec21.scope: Deactivated successfully.
Nov 25 23:46:24 compute-0 podman[208488]: 2025-11-25 23:46:24.668975215 +0000 UTC m=+0.052947626 container create 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 23:46:24 compute-0 systemd[1]: Started libpod-conmon-15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea.scope.
Nov 25 23:46:24 compute-0 podman[208488]: 2025-11-25 23:46:24.647160003 +0000 UTC m=+0.031132414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:46:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61fec0c93a2d99398aaacaf2945300686221dae8a021377ddcbfb16319dd078/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61fec0c93a2d99398aaacaf2945300686221dae8a021377ddcbfb16319dd078/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61fec0c93a2d99398aaacaf2945300686221dae8a021377ddcbfb16319dd078/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61fec0c93a2d99398aaacaf2945300686221dae8a021377ddcbfb16319dd078/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:24 compute-0 podman[208488]: 2025-11-25 23:46:24.784234269 +0000 UTC m=+0.168206680 container init 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:46:24 compute-0 podman[208488]: 2025-11-25 23:46:24.797851459 +0000 UTC m=+0.181823840 container start 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 23:46:24 compute-0 podman[208488]: 2025-11-25 23:46:24.801210609 +0000 UTC m=+0.185182990 container attach 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 23:46:24 compute-0 sudo[208558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckvtlllootkrcdcinoneiupwgdtzhkqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114384.3701646-775-56314254395744/AnsiballZ_stat.py'
Nov 25 23:46:24 compute-0 sudo[208558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:25 compute-0 python3.9[208562]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:25 compute-0 sudo[208558]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:25 compute-0 sudo[208687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thcaxcbuujkcioumygwzytluhndjbnpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114384.3701646-775-56314254395744/AnsiballZ_copy.py'
Nov 25 23:46:25 compute-0 sudo[208687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:25 compute-0 practical_johnson[208529]: {
Nov 25 23:46:25 compute-0 practical_johnson[208529]:     "0": [
Nov 25 23:46:25 compute-0 practical_johnson[208529]:         {
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "devices": [
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "/dev/loop3"
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             ],
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_name": "ceph_lv0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_size": "21470642176",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "name": "ceph_lv0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "tags": {
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.cluster_name": "ceph",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.crush_device_class": "",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.encrypted": "0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.osd_id": "0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.type": "block",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.vdo": "0"
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             },
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "type": "block",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "vg_name": "ceph_vg0"
Nov 25 23:46:25 compute-0 practical_johnson[208529]:         }
Nov 25 23:46:25 compute-0 practical_johnson[208529]:     ],
Nov 25 23:46:25 compute-0 practical_johnson[208529]:     "1": [
Nov 25 23:46:25 compute-0 practical_johnson[208529]:         {
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "devices": [
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "/dev/loop4"
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             ],
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_name": "ceph_lv1",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_size": "21470642176",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "name": "ceph_lv1",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "tags": {
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.cluster_name": "ceph",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.crush_device_class": "",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.encrypted": "0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.osd_id": "1",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.type": "block",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.vdo": "0"
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             },
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "type": "block",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "vg_name": "ceph_vg1"
Nov 25 23:46:25 compute-0 practical_johnson[208529]:         }
Nov 25 23:46:25 compute-0 practical_johnson[208529]:     ],
Nov 25 23:46:25 compute-0 practical_johnson[208529]:     "2": [
Nov 25 23:46:25 compute-0 practical_johnson[208529]:         {
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "devices": [
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "/dev/loop5"
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             ],
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_name": "ceph_lv2",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_size": "21470642176",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "name": "ceph_lv2",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "tags": {
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.cluster_name": "ceph",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.crush_device_class": "",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.encrypted": "0",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.osd_id": "2",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.type": "block",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:                 "ceph.vdo": "0"
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             },
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "type": "block",
Nov 25 23:46:25 compute-0 practical_johnson[208529]:             "vg_name": "ceph_vg2"
Nov 25 23:46:25 compute-0 practical_johnson[208529]:         }
Nov 25 23:46:25 compute-0 practical_johnson[208529]:     ]
Nov 25 23:46:25 compute-0 practical_johnson[208529]: }
Nov 25 23:46:25 compute-0 systemd[1]: libpod-15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea.scope: Deactivated successfully.
Nov 25 23:46:25 compute-0 podman[208488]: 2025-11-25 23:46:25.535931026 +0000 UTC m=+0.919903427 container died 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e61fec0c93a2d99398aaacaf2945300686221dae8a021377ddcbfb16319dd078-merged.mount: Deactivated successfully.
Nov 25 23:46:25 compute-0 podman[208488]: 2025-11-25 23:46:25.610549029 +0000 UTC m=+0.994521410 container remove 15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_johnson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:46:25 compute-0 ceph-mon[75654]: pgmap v488: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:25 compute-0 systemd[1]: libpod-conmon-15601ca826acedf49336c6b4aefe2b070c241b221525c3b0e240feea0f0775ea.scope: Deactivated successfully.
Nov 25 23:46:25 compute-0 sudo[208254]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:25 compute-0 python3.9[208689]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114384.3701646-775-56314254395744/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:25 compute-0 sudo[208701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:46:25 compute-0 sudo[208701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:25 compute-0 sudo[208701]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:25 compute-0 sudo[208687]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:25 compute-0 sudo[208726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:46:25 compute-0 sudo[208726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:25 compute-0 sudo[208726]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:25 compute-0 sudo[208775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:46:25 compute-0 sudo[208775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:25 compute-0 sudo[208775]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:25 compute-0 sudo[208823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:46:25 compute-0 sudo[208823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:46:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:46:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:46:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:46:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:46:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:46:26 compute-0 sudo[208977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mozeriwixdkhetidooswvvgjdjtoqroi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114385.920137-775-238866778892393/AnsiballZ_stat.py'
Nov 25 23:46:26 compute-0 sudo[208977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:26 compute-0 podman[208992]: 2025-11-25 23:46:26.311746607 +0000 UTC m=+0.040459768 container create 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:46:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:26 compute-0 systemd[1]: Started libpod-conmon-9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6.scope.
Nov 25 23:46:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:46:26 compute-0 podman[208992]: 2025-11-25 23:46:26.293357209 +0000 UTC m=+0.022070450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:46:26 compute-0 python3.9[208985]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:26 compute-0 sudo[208977]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:26 compute-0 podman[208992]: 2025-11-25 23:46:26.407345778 +0000 UTC m=+0.136059029 container init 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 23:46:26 compute-0 podman[208992]: 2025-11-25 23:46:26.414723388 +0000 UTC m=+0.143436549 container start 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:46:26 compute-0 podman[208992]: 2025-11-25 23:46:26.418229004 +0000 UTC m=+0.146942205 container attach 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:46:26 compute-0 recursing_bose[209008]: 167 167
Nov 25 23:46:26 compute-0 systemd[1]: libpod-9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6.scope: Deactivated successfully.
Nov 25 23:46:26 compute-0 podman[208992]: 2025-11-25 23:46:26.420896206 +0000 UTC m=+0.149609397 container died 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-28d9a9502b1f6aba5815a7f1663c6f1f8a9943e26db7fdcfc7ac9d87c097bcc8-merged.mount: Deactivated successfully.
Nov 25 23:46:26 compute-0 podman[208992]: 2025-11-25 23:46:26.464770595 +0000 UTC m=+0.193483796 container remove 9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 23:46:26 compute-0 systemd[1]: libpod-conmon-9dc5c4b3f060dc427fc3e12d07db96802341fbb58007b551a88af459c0612ff6.scope: Deactivated successfully.
Nov 25 23:46:26 compute-0 podman[209084]: 2025-11-25 23:46:26.71650997 +0000 UTC m=+0.076604128 container create 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:46:26 compute-0 systemd[1]: Started libpod-conmon-668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f.scope.
Nov 25 23:46:26 compute-0 podman[209084]: 2025-11-25 23:46:26.682260501 +0000 UTC m=+0.042354739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:46:26 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5df890663211821870406e799a66f696eb607716823b9918bc3cc79bb74ae32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5df890663211821870406e799a66f696eb607716823b9918bc3cc79bb74ae32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5df890663211821870406e799a66f696eb607716823b9918bc3cc79bb74ae32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5df890663211821870406e799a66f696eb607716823b9918bc3cc79bb74ae32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:46:26 compute-0 podman[209084]: 2025-11-25 23:46:26.829511383 +0000 UTC m=+0.189605621 container init 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:46:26 compute-0 podman[209084]: 2025-11-25 23:46:26.837357425 +0000 UTC m=+0.197451623 container start 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:46:26 compute-0 podman[209084]: 2025-11-25 23:46:26.84120763 +0000 UTC m=+0.201301788 container attach 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 23:46:26 compute-0 sudo[209171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fugpgteyhbhldqffkqqdobxbvjzgxtlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114385.920137-775-238866778892393/AnsiballZ_copy.py'
Nov 25 23:46:26 compute-0 sudo[209171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:27 compute-0 python3.9[209175]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114385.920137-775-238866778892393/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:27 compute-0 sudo[209171]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:27 compute-0 sudo[209336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdovalbeavebhkdxqvzgsjpakhqsadbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114387.2453518-775-261889011415995/AnsiballZ_stat.py'
Nov 25 23:46:27 compute-0 sudo[209336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:27 compute-0 ceph-mon[75654]: pgmap v489: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:27 compute-0 python3.9[209341]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:27 compute-0 keen_rhodes[209142]: {
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "osd_id": 2,
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "type": "bluestore"
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:     },
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "osd_id": 1,
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "type": "bluestore"
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:     },
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "osd_id": 0,
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:         "type": "bluestore"
Nov 25 23:46:27 compute-0 keen_rhodes[209142]:     }
Nov 25 23:46:27 compute-0 keen_rhodes[209142]: }
Nov 25 23:46:27 compute-0 sudo[209336]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:27 compute-0 systemd[1]: libpod-668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f.scope: Deactivated successfully.
Nov 25 23:46:27 compute-0 podman[209084]: 2025-11-25 23:46:27.840903699 +0000 UTC m=+1.200997857 container died 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5df890663211821870406e799a66f696eb607716823b9918bc3cc79bb74ae32-merged.mount: Deactivated successfully.
Nov 25 23:46:27 compute-0 podman[209084]: 2025-11-25 23:46:27.912336626 +0000 UTC m=+1.272430794 container remove 668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rhodes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:46:27 compute-0 systemd[1]: libpod-conmon-668ebd37dc5b43f917e35894a99680c3e37ed928fdd867129c5a0f6a0e7b519f.scope: Deactivated successfully.
Nov 25 23:46:27 compute-0 sudo[208823]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:46:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:46:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:46:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:46:27 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 0110317f-e754-48b0-82ed-769cc7332997 does not exist
Nov 25 23:46:28 compute-0 sudo[209392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:46:28 compute-0 sudo[209392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:28 compute-0 sudo[209392]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:28 compute-0 sudo[209435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:46:28 compute-0 sudo[209435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:46:28 compute-0 sudo[209435]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:28 compute-0 sudo[209539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghonaspqpbxxksewwkvjnvlxemxygkls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114387.2453518-775-261889011415995/AnsiballZ_copy.py'
Nov 25 23:46:28 compute-0 sudo[209539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:28 compute-0 python3.9[209541]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114387.2453518-775-261889011415995/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:28 compute-0 sudo[209539]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:46:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:46:28 compute-0 ceph-mon[75654]: pgmap v490: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:29 compute-0 sudo[209691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vybdzolqrubdxdhrlkafwgavjfmpqamj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114388.6411607-775-116090343045594/AnsiballZ_stat.py'
Nov 25 23:46:29 compute-0 sudo[209691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:29 compute-0 python3.9[209693]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:29 compute-0 sudo[209691]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:29 compute-0 sudo[209814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmmqqbsusvywaqbfdmvepxlxoyxfecdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114388.6411607-775-116090343045594/AnsiballZ_copy.py'
Nov 25 23:46:29 compute-0 sudo[209814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:29 compute-0 python3.9[209816]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114388.6411607-775-116090343045594/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:29 compute-0 sudo[209814]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:30 compute-0 sudo[209966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quoxwknvvqbdyqhzwbdnjnvrosxjdmcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114390.0266225-775-36547474698443/AnsiballZ_stat.py'
Nov 25 23:46:30 compute-0 sudo[209966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:30 compute-0 python3.9[209968]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:30 compute-0 sudo[209966]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:31 compute-0 sudo[210089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngdetoqpdtxtykdfvshrxeatiqiimfbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114390.0266225-775-36547474698443/AnsiballZ_copy.py'
Nov 25 23:46:31 compute-0 sudo[210089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:31 compute-0 python3.9[210091]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114390.0266225-775-36547474698443/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:31 compute-0 sudo[210089]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:31 compute-0 ceph-mon[75654]: pgmap v491: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:31 compute-0 sudo[210241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqwckxpwyispjnybnbhjdcaruwvaiygm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114391.384346-775-36274582867095/AnsiballZ_stat.py'
Nov 25 23:46:31 compute-0 sudo[210241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:31 compute-0 python3.9[210243]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:31 compute-0 sudo[210241]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:32 compute-0 sudo[210364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zolcswtpifwyjyrvaotxiynnrivzjoyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114391.384346-775-36274582867095/AnsiballZ_copy.py'
Nov 25 23:46:32 compute-0 sudo[210364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:32 compute-0 python3.9[210366]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114391.384346-775-36274582867095/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:32 compute-0 sudo[210364]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:33 compute-0 sudo[210516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nofnniveomraskjsrjieqfzfbmcjjvpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114392.7747965-775-32331882837140/AnsiballZ_stat.py'
Nov 25 23:46:33 compute-0 sudo[210516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:33 compute-0 python3.9[210518]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:33 compute-0 ceph-mon[75654]: pgmap v492: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:33 compute-0 sudo[210516]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:33 compute-0 sudo[210639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbuwdeckwgtnmycuuujmueksohggahnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114392.7747965-775-32331882837140/AnsiballZ_copy.py'
Nov 25 23:46:33 compute-0 sudo[210639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:34 compute-0 python3.9[210641]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114392.7747965-775-32331882837140/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:34 compute-0 sudo[210639]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:34 compute-0 sudo[210791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbezcbdzmkmcshirfaakfsbmpyuzdclx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114394.2072077-775-208307381025807/AnsiballZ_stat.py'
Nov 25 23:46:34 compute-0 sudo[210791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:34 compute-0 python3.9[210793]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:34 compute-0 sudo[210791]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:35 compute-0 sudo[210914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wakqsqjlwtsdbygkcuibvhimuktwlgct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114394.2072077-775-208307381025807/AnsiballZ_copy.py'
Nov 25 23:46:35 compute-0 sudo[210914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:35 compute-0 ceph-mon[75654]: pgmap v493: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:35 compute-0 podman[210916]: 2025-11-25 23:46:35.50888919 +0000 UTC m=+0.163209292 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 23:46:35 compute-0 python3.9[210917]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114394.2072077-775-208307381025807/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:35 compute-0 sudo[210914]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:36 compute-0 sudo[211092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rejqpnrfhvoqcodyxvykugubyestornc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114395.7450414-775-181594614764895/AnsiballZ_stat.py'
Nov 25 23:46:36 compute-0 sudo[211092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:36 compute-0 python3.9[211094]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:46:36 compute-0 sudo[211092]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:36 compute-0 sudo[211215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzxdoipycucnlxdgvjvopbwhqrcxthcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114395.7450414-775-181594614764895/AnsiballZ_copy.py'
Nov 25 23:46:36 compute-0 sudo[211215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:37 compute-0 python3.9[211217]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114395.7450414-775-181594614764895/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:37 compute-0 sudo[211215]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:37 compute-0 ceph-mon[75654]: pgmap v494: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:37 compute-0 python3.9[211367]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:46:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:38 compute-0 sudo[211520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hktyejdwmhpudsaajhtjaaedkxdzgezc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114398.1178374-981-135597917153315/AnsiballZ_seboolean.py'
Nov 25 23:46:38 compute-0 sudo[211520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:38 compute-0 python3.9[211522]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 25 23:46:39 compute-0 ceph-mon[75654]: pgmap v495: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:39 compute-0 sudo[211520]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:40 compute-0 sudo[211676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiwlilqppltlmluqaltcozmpmvrdwgxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114400.237979-989-17652563578299/AnsiballZ_copy.py'
Nov 25 23:46:40 compute-0 dbus-broker-launch[769]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 25 23:46:40 compute-0 sudo[211676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:46:40.753 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:46:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:46:40.755 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:46:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:46:40.755 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:46:40 compute-0 python3.9[211678]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:40 compute-0 sudo[211676]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:41 compute-0 sudo[211828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzlunhsnlzdjpstinjyecdpkysiexrwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114401.083836-989-23134915981638/AnsiballZ_copy.py'
Nov 25 23:46:41 compute-0 sudo[211828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:41 compute-0 ceph-mon[75654]: pgmap v496: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:41 compute-0 python3.9[211830]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:41 compute-0 sudo[211828]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:42 compute-0 sudo[211980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thkcqubjyynhmxznxtjxkynroxwbsmah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114401.846259-989-25122083692991/AnsiballZ_copy.py'
Nov 25 23:46:42 compute-0 sudo[211980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:42 compute-0 python3.9[211982]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:42 compute-0 sudo[211980]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:42 compute-0 sudo[212132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toopooqibfgpfkztdmhzetgnrbworikq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114402.6658466-989-250397389943318/AnsiballZ_copy.py'
Nov 25 23:46:42 compute-0 sudo[212132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:43 compute-0 python3.9[212134]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:43 compute-0 sudo[212132]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:43 compute-0 ceph-mon[75654]: pgmap v497: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:43 compute-0 sudo[212297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdmynlwelvmynviurfxegjgqdsbqtwqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114403.3605838-989-114167850129159/AnsiballZ_copy.py'
Nov 25 23:46:43 compute-0 sudo[212297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:43 compute-0 podman[212258]: 2025-11-25 23:46:43.785914331 +0000 UTC m=+0.073787528 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 23:46:43 compute-0 python3.9[212306]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:44 compute-0 sudo[212297]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:44 compute-0 sudo[212457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zplyuzrttgsurtberbjvexqosqdlyuvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114404.24661-1025-103572989217931/AnsiballZ_copy.py'
Nov 25 23:46:44 compute-0 sudo[212457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:44 compute-0 python3.9[212459]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:44 compute-0 sudo[212457]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:45 compute-0 sudo[212609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfypwemzqqbbilkhxxqaqmwcjtikkwbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114405.100222-1025-205405869024948/AnsiballZ_copy.py'
Nov 25 23:46:45 compute-0 sudo[212609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:45 compute-0 ceph-mon[75654]: pgmap v498: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:45 compute-0 python3.9[212611]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:45 compute-0 sudo[212609]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:46 compute-0 sudo[212762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyceycfsokswduwmcsajgnyjgkkfmmwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114405.890594-1025-15989668240383/AnsiballZ_copy.py'
Nov 25 23:46:46 compute-0 sudo[212762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:46 compute-0 python3.9[212764]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:46 compute-0 sudo[212762]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:46 compute-0 sudo[212914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svtayfawilpptbphxchuxcltkvdvhkhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114406.6089227-1025-260284400889178/AnsiballZ_copy.py'
Nov 25 23:46:46 compute-0 sudo[212914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:47 compute-0 python3.9[212916]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:47 compute-0 sudo[212914]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:47 compute-0 ceph-mon[75654]: pgmap v499: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:47 compute-0 sudo[213066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfdovjtcmfrazxsdpvtrfrxrgiuhrqvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114407.2696373-1025-200879178786921/AnsiballZ_copy.py'
Nov 25 23:46:47 compute-0 sudo[213066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:47 compute-0 python3.9[213068]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:47 compute-0 sudo[213066]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v500: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:48 compute-0 sudo[213218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkykdrrewnsflgoyphxbcqclzofywlze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114408.1086173-1061-148333678945283/AnsiballZ_systemd.py'
Nov 25 23:46:48 compute-0 sudo[213218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:48 compute-0 python3.9[213220]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:46:48 compute-0 systemd[1]: Reloading.
Nov 25 23:46:48 compute-0 systemd-rc-local-generator[213244]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:46:48 compute-0 systemd-sysv-generator[213249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:46:49 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 25 23:46:49 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 25 23:46:49 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 25 23:46:49 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 25 23:46:49 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 25 23:46:49 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 25 23:46:49 compute-0 sudo[213218]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:49 compute-0 ceph-mon[75654]: pgmap v500: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:49 compute-0 sudo[213412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buimqgvwolorvhkginwzwtsdytuohpjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114409.4856105-1061-161595991935927/AnsiballZ_systemd.py'
Nov 25 23:46:49 compute-0 sudo[213412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:50 compute-0 python3.9[213414]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:46:50 compute-0 systemd[1]: Reloading.
Nov 25 23:46:50 compute-0 systemd-rc-local-generator[213442]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:46:50 compute-0 systemd-sysv-generator[213446]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:46:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:50 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 25 23:46:50 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 25 23:46:50 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 25 23:46:50 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 25 23:46:50 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 25 23:46:50 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 25 23:46:50 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 23:46:50 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 25 23:46:50 compute-0 sudo[213412]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:51 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 25 23:46:51 compute-0 sudo[213629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igptjhnkqgjovgagdsdxtbpyuhcvzeyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114410.824063-1061-175685531141526/AnsiballZ_systemd.py'
Nov 25 23:46:51 compute-0 sudo[213629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:51 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 25 23:46:51 compute-0 python3.9[213631]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:46:51 compute-0 systemd[1]: Reloading.
Nov 25 23:46:51 compute-0 ceph-mon[75654]: pgmap v501: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:51 compute-0 systemd-rc-local-generator[213665]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:46:51 compute-0 systemd-sysv-generator[213668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:46:51 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 25 23:46:51 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 25 23:46:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:52 compute-0 setroubleshoot[213578]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a0b44441-1549-43ae-9d32-761e369081f6
Nov 25 23:46:52 compute-0 setroubleshoot[213578]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 25 23:46:52 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 25 23:46:52 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 25 23:46:52 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 25 23:46:52 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 25 23:46:52 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 25 23:46:52 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 25 23:46:52 compute-0 sudo[213629]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:53 compute-0 ceph-mon[75654]: pgmap v502: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:53 compute-0 sudo[213850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reaxzisgdkteuqbvynpggporxsngoqnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114413.188395-1061-117660551008380/AnsiballZ_systemd.py'
Nov 25 23:46:53 compute-0 sudo[213850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:53 compute-0 python3.9[213852]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:46:53 compute-0 systemd[1]: Reloading.
Nov 25 23:46:54 compute-0 systemd-sysv-generator[213883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:46:54 compute-0 systemd-rc-local-generator[213880]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:46:54 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 25 23:46:54 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 25 23:46:54 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 25 23:46:54 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 25 23:46:54 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 25 23:46:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v503: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:54 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 25 23:46:54 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 25 23:46:54 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 25 23:46:54 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 25 23:46:54 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 25 23:46:54 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 23:46:54 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 25 23:46:54 compute-0 sudo[213850]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:55 compute-0 sudo[214065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkmxfcaltdzuvrlmkxgzzclndwmokimv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114414.6458087-1061-100681126810244/AnsiballZ_systemd.py'
Nov 25 23:46:55 compute-0 sudo[214065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:55 compute-0 python3.9[214067]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:46:55 compute-0 systemd[1]: Reloading.
Nov 25 23:46:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:46:55 compute-0 systemd-rc-local-generator[214095]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:46:55 compute-0 systemd-sysv-generator[214099]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:46:55 compute-0 ceph-mon[75654]: pgmap v503: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:55 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 25 23:46:55 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 25 23:46:55 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 25 23:46:55 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 25 23:46:55 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 25 23:46:55 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 25 23:46:55 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 25 23:46:55 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 25 23:46:55 compute-0 sudo[214065]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:46:56
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', '.mgr']
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:46:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:56 compute-0 sudo[214277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-janhqdoczcrrpjvsdyutksdimefaadkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114416.1554694-1098-139540261336269/AnsiballZ_file.py'
Nov 25 23:46:56 compute-0 sudo[214277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:56 compute-0 python3.9[214279]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:46:56 compute-0 sudo[214277]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:57 compute-0 sudo[214429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqogkrkdmhmrsplsfaeehydiqmtgqimv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114416.8742733-1106-235677797722552/AnsiballZ_find.py'
Nov 25 23:46:57 compute-0 sudo[214429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:57 compute-0 python3.9[214431]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 23:46:57 compute-0 sudo[214429]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:57 compute-0 ceph-mon[75654]: pgmap v504: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:58 compute-0 sudo[214581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aquedcnspkqajliivltsgllmptdnlvhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114417.562972-1114-251230076146707/AnsiballZ_command.py'
Nov 25 23:46:58 compute-0 sudo[214581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:46:58 compute-0 python3.9[214583]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:46:58 compute-0 sudo[214581]: pam_unix(sudo:session): session closed for user root
Nov 25 23:46:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:46:59 compute-0 python3.9[214737]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 23:46:59 compute-0 ceph-mon[75654]: pgmap v505: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:00 compute-0 python3.9[214887]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v506: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:00 compute-0 python3.9[215008]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114419.5057478-1133-172326382186032/.source.xml follow=False _original_basename=secret.xml.j2 checksum=0dfd54db3937ba95246ebf996592c369f4394dfb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:01 compute-0 sudo[215158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrhbxwhwreatrwsevbfwpdeuygsazdju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114420.930321-1148-36851010620896/AnsiballZ_command.py'
Nov 25 23:47:01 compute-0 sudo[215158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:01 compute-0 python3.9[215160]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 101922db-575f-58e2-980f-928050464f69
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:47:01 compute-0 ceph-mon[75654]: pgmap v506: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:01 compute-0 polkitd[44014]: Registered Authentication Agent for unix-process:215162:323423 (system bus name :1.2763 [pkttyagent --process 215162 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 23:47:01 compute-0 polkitd[44014]: Unregistered Authentication Agent for unix-process:215162:323423 (system bus name :1.2763, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 23:47:01 compute-0 polkitd[44014]: Registered Authentication Agent for unix-process:215161:323423 (system bus name :1.2764 [pkttyagent --process 215161 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 23:47:01 compute-0 polkitd[44014]: Unregistered Authentication Agent for unix-process:215161:323423 (system bus name :1.2764, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:47:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:47:01 compute-0 sudo[215158]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:02 compute-0 python3.9[215322]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:02 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 25 23:47:02 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 25 23:47:03 compute-0 sudo[215474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skbspistrlrhpvmoipeodmryrnkjppqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114422.7862337-1164-259040117879048/AnsiballZ_command.py'
Nov 25 23:47:03 compute-0 sudo[215474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:03 compute-0 sudo[215474]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:03 compute-0 ceph-mon[75654]: pgmap v507: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:03 compute-0 sudo[215627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqwdqihsjdqupfjnlatjtlysexxknxql ; FSID=101922db-575f-58e2-980f-928050464f69 KEY=AQAfPCZpAAAAABAAikUZSrYMJ3qAPbvPGOplUw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114423.648244-1172-140869159849403/AnsiballZ_command.py'
Nov 25 23:47:04 compute-0 sudo[215627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:04 compute-0 sshd-session[215374]: Received disconnect from 124.18.141.70 port 44638:11: Bye Bye [preauth]
Nov 25 23:47:04 compute-0 sshd-session[215374]: Disconnected from authenticating user root 124.18.141.70 port 44638 [preauth]
Nov 25 23:47:04 compute-0 polkitd[44014]: Registered Authentication Agent for unix-process:215630:323694 (system bus name :1.2767 [pkttyagent --process 215630 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 25 23:47:04 compute-0 polkitd[44014]: Unregistered Authentication Agent for unix-process:215630:323694 (system bus name :1.2767, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 25 23:47:04 compute-0 sudo[215627]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:04 compute-0 sudo[215785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufzbhyoavauxwrhwwtdxearnvhxrczqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114424.5454597-1180-206007580641519/AnsiballZ_copy.py'
Nov 25 23:47:04 compute-0 sudo[215785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:05 compute-0 python3.9[215787]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:05 compute-0 sudo[215785]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:05 compute-0 ceph-mon[75654]: pgmap v508: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:05 compute-0 sudo[215954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruqnvmfgkzubjiupxueketygvxuqcrbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114425.3150318-1188-213143539195875/AnsiballZ_stat.py'
Nov 25 23:47:05 compute-0 sudo[215954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:05 compute-0 podman[215911]: 2025-11-25 23:47:05.922243755 +0000 UTC m=+0.166415827 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 23:47:06 compute-0 python3.9[215959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:06 compute-0 sudo[215954]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v509: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:06 compute-0 sudo[216086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djzikxtfwhoeeckqbxwganbyvssxnlsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114425.3150318-1188-213143539195875/AnsiballZ_copy.py'
Nov 25 23:47:06 compute-0 sudo[216086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:06 compute-0 python3.9[216088]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114425.3150318-1188-213143539195875/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:06 compute-0 sudo[216086]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:07 compute-0 sudo[216238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnbcvlbxctdntvyecjagnstylmeezpsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114426.9773278-1204-155214794814082/AnsiballZ_file.py'
Nov 25 23:47:07 compute-0 sudo[216238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:07 compute-0 python3.9[216240]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:07 compute-0 sudo[216238]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:07 compute-0 ceph-mon[75654]: pgmap v509: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:08 compute-0 sudo[216390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-danbienssbdmjdkoxqrgfoytzeqakojv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114427.753297-1212-88058254693130/AnsiballZ_stat.py'
Nov 25 23:47:08 compute-0 sudo[216390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:08 compute-0 python3.9[216392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:08 compute-0 sudo[216390]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:08 compute-0 sudo[216468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqcpylsmbdiioviawtxafxxxpgsxzjxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114427.753297-1212-88058254693130/AnsiballZ_file.py'
Nov 25 23:47:08 compute-0 sudo[216468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:08 compute-0 python3.9[216470]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:08 compute-0 sudo[216468]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:09 compute-0 sudo[216620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyytlviqvezcftcaetjnsbgxrvgjevvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114428.9801705-1224-61580416225121/AnsiballZ_stat.py'
Nov 25 23:47:09 compute-0 sudo[216620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:09 compute-0 python3.9[216622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:09 compute-0 sudo[216620]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:09 compute-0 ceph-mon[75654]: pgmap v510: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:09 compute-0 sudo[216698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sysjulptptptvnwknvawzjtpdccelvlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114428.9801705-1224-61580416225121/AnsiballZ_file.py'
Nov 25 23:47:09 compute-0 sudo[216698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:09 compute-0 python3.9[216700]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zs05y00_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:09 compute-0 sudo[216698]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:10 compute-0 sudo[216850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbgwgolfvptdvkreexmgyuqjbgofsukk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114430.2176106-1236-14931578612444/AnsiballZ_stat.py'
Nov 25 23:47:10 compute-0 sudo[216850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:10 compute-0 python3.9[216852]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:10 compute-0 sudo[216850]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:11 compute-0 sudo[216928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdskwbaihcvgwlphyxhjbepytupkqluh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114430.2176106-1236-14931578612444/AnsiballZ_file.py'
Nov 25 23:47:11 compute-0 sudo[216928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:11 compute-0 python3.9[216930]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:11 compute-0 sudo[216928]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:11 compute-0 ceph-mon[75654]: pgmap v511: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:11 compute-0 sudo[217080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thocpelqcqmvjbddpuhzccuguvfhhyeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114431.559549-1249-260109478271114/AnsiballZ_command.py'
Nov 25 23:47:11 compute-0 sudo[217080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:12 compute-0 python3.9[217082]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:47:12 compute-0 sudo[217080]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:12 compute-0 sudo[217233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enlhyuuksmxgtaxockkegcfhccbfspwo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764114432.2343414-1257-231963364684/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 23:47:12 compute-0 sudo[217233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:12 compute-0 python3[217235]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 23:47:12 compute-0 sudo[217233]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:13 compute-0 sudo[217385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlsjkrcarirnokvmucewibjadhbywojx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114433.083765-1265-181272371311521/AnsiballZ_stat.py'
Nov 25 23:47:13 compute-0 sudo[217385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:13 compute-0 ceph-mon[75654]: pgmap v512: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:13 compute-0 python3.9[217387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:13 compute-0 sudo[217385]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:13 compute-0 podman[217437]: 2025-11-25 23:47:13.990850917 +0000 UTC m=+0.054808525 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 23:47:14 compute-0 sudo[217480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjyljdyjgftdnieklqwlpttdqkjixnyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114433.083765-1265-181272371311521/AnsiballZ_file.py'
Nov 25 23:47:14 compute-0 sudo[217480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:14 compute-0 python3.9[217483]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:14 compute-0 sudo[217480]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:14 compute-0 sudo[217633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upgkvvupfbxfnkhgyxkqxfloovbgbsyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114434.416687-1277-83954436015299/AnsiballZ_stat.py'
Nov 25 23:47:14 compute-0 sudo[217633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:15 compute-0 python3.9[217635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:15 compute-0 sudo[217633]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:15 compute-0 sudo[217711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibkmnpseoqvpfwxfyioopxiuxpohobsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114434.416687-1277-83954436015299/AnsiballZ_file.py'
Nov 25 23:47:15 compute-0 sudo[217711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:15 compute-0 python3.9[217713]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:15 compute-0 sudo[217711]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:15 compute-0 ceph-mon[75654]: pgmap v513: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:16 compute-0 sudo[217863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goomrbdhqvzatisdhqxcxdoibqpyxtrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114435.7867873-1289-242898743923076/AnsiballZ_stat.py'
Nov 25 23:47:16 compute-0 sudo[217863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:16 compute-0 python3.9[217865]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:16 compute-0 sudo[217863]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:16 compute-0 sudo[217941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-advadwtlbhdmkswywuxoodhflxcqirvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114435.7867873-1289-242898743923076/AnsiballZ_file.py'
Nov 25 23:47:16 compute-0 sudo[217941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:16 compute-0 python3.9[217943]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:16 compute-0 sudo[217941]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:17 compute-0 sudo[218093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwcpdvtswjjrkhsnrmmvpcdgknwxomov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114436.9499314-1301-104489684184451/AnsiballZ_stat.py'
Nov 25 23:47:17 compute-0 sudo[218093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:17 compute-0 python3.9[218095]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:17 compute-0 sudo[218093]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:17 compute-0 ceph-mon[75654]: pgmap v514: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:17 compute-0 sudo[218171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgsiiheqjgyqvedawlljltpkmodjjoat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114436.9499314-1301-104489684184451/AnsiballZ_file.py'
Nov 25 23:47:17 compute-0 sudo[218171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:18 compute-0 python3.9[218173]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:18 compute-0 sudo[218171]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:18 compute-0 sudo[218323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaxjhmdjgvxmudwnlwguauzxiyaiivgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114438.2151957-1313-161999911809549/AnsiballZ_stat.py'
Nov 25 23:47:18 compute-0 sudo[218323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:18 compute-0 python3.9[218325]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:18 compute-0 sudo[218323]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:19 compute-0 sudo[218448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fudkuodgoksgnnmsvvciwjjitsalmbha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114438.2151957-1313-161999911809549/AnsiballZ_copy.py'
Nov 25 23:47:19 compute-0 sudo[218448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:19 compute-0 python3.9[218450]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764114438.2151957-1313-161999911809549/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:19 compute-0 sudo[218448]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:19 compute-0 ceph-mon[75654]: pgmap v515: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:20 compute-0 sudo[218600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weaiwhyidwucffwyxyhedwuwfgfwkfdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114439.8291354-1328-245591053320749/AnsiballZ_file.py'
Nov 25 23:47:20 compute-0 sudo[218600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:20 compute-0 python3.9[218602]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:20 compute-0 sudo[218600]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:21 compute-0 sudo[218752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohlrnjfktubmijrdcxheynkeridxoape ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114440.7913935-1336-6617579647678/AnsiballZ_command.py'
Nov 25 23:47:21 compute-0 sudo[218752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:21 compute-0 python3.9[218754]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:47:21 compute-0 sudo[218752]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:21 compute-0 ceph-mon[75654]: pgmap v516: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.660305) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441660335, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2036, "num_deletes": 251, "total_data_size": 2341115, "memory_usage": 2389456, "flush_reason": "Manual Compaction"}
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441675540, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2279957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9069, "largest_seqno": 11104, "table_properties": {"data_size": 2270726, "index_size": 5853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17733, "raw_average_key_size": 19, "raw_value_size": 2252369, "raw_average_value_size": 2469, "num_data_blocks": 269, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114204, "oldest_key_time": 1764114204, "file_creation_time": 1764114441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 15332 microseconds, and 9485 cpu microseconds.
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.675600) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2279957 bytes OK
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.675657) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.677679) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.677702) EVENT_LOG_v1 {"time_micros": 1764114441677695, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.677722) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2332636, prev total WAL file size 2332636, number of live WAL files 2.
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.678879) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2226KB)], [26(4609KB)]
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441678951, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 6999812, "oldest_snapshot_seqno": -1}
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3208 keys, 5898567 bytes, temperature: kUnknown
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441727752, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 5898567, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5872637, "index_size": 16796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 73947, "raw_average_key_size": 23, "raw_value_size": 5810768, "raw_average_value_size": 1811, "num_data_blocks": 741, "num_entries": 3208, "num_filter_entries": 3208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.728085) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 5898567 bytes
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.729716) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.1 rd, 120.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 4.5 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 3722, records dropped: 514 output_compression: NoCompression
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.729747) EVENT_LOG_v1 {"time_micros": 1764114441729732, "job": 10, "event": "compaction_finished", "compaction_time_micros": 48928, "compaction_time_cpu_micros": 30451, "output_level": 6, "num_output_files": 1, "total_output_size": 5898567, "num_input_records": 3722, "num_output_records": 3208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441730686, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114441732507, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.678767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.732583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.732826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.732829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.732832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:47:21 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:47:21.732835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:47:22 compute-0 sudo[218907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evbljnbcwlnfkaaaqwlmhspspdslitfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114441.6095703-1344-183507599143671/AnsiballZ_blockinfile.py'
Nov 25 23:47:22 compute-0 sudo[218907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:22 compute-0 python3.9[218909]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:22 compute-0 sudo[218907]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:23 compute-0 sudo[219059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zseghyqktvjbpihmhawkcxsabdiyxeoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114442.7812417-1353-215017665313640/AnsiballZ_command.py'
Nov 25 23:47:23 compute-0 sudo[219059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:23 compute-0 python3.9[219061]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:47:23 compute-0 sudo[219059]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:23 compute-0 ceph-mon[75654]: pgmap v517: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:24 compute-0 sudo[219212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpyupynhxwbkzokkybmabymyqnvzhgxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114443.7302866-1361-162427864517120/AnsiballZ_stat.py'
Nov 25 23:47:24 compute-0 sudo[219212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:24 compute-0 python3.9[219214]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:47:24 compute-0 sudo[219212]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:24 compute-0 sudo[219366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufgfczpbucdqwmywstbabwztsxwakbky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114444.5474072-1369-133675201605341/AnsiballZ_command.py'
Nov 25 23:47:24 compute-0 sudo[219366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:25 compute-0 python3.9[219368]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:47:25 compute-0 sudo[219366]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:25 compute-0 ceph-mon[75654]: pgmap v518: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:25 compute-0 sudo[219521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhtvinyzkbmyfmszhybbzuukpykmjmpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114445.4593816-1377-281058659544572/AnsiballZ_file.py'
Nov 25 23:47:25 compute-0 sudo[219521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:47:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:47:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:47:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:47:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:47:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:47:26 compute-0 python3.9[219523]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:26 compute-0 sudo[219521]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:26 compute-0 sudo[219673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aevmzbpigojtnfyfirzhwnboqlcxildv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114446.3719506-1385-90066066387964/AnsiballZ_stat.py'
Nov 25 23:47:26 compute-0 sudo[219673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:26 compute-0 python3.9[219675]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:26 compute-0 sudo[219673]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:27 compute-0 sudo[219796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrxnablejzwrovslywfebttrjkanorlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114446.3719506-1385-90066066387964/AnsiballZ_copy.py'
Nov 25 23:47:27 compute-0 sudo[219796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:27 compute-0 python3.9[219798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114446.3719506-1385-90066066387964/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:27 compute-0 sudo[219796]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:27 compute-0 ceph-mon[75654]: pgmap v519: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:28 compute-0 sudo[219971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfnbkzxthaogwyinvpszqkehesxcqlbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114447.8088746-1400-19216227534575/AnsiballZ_stat.py'
Nov 25 23:47:28 compute-0 sudo[219929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:47:28 compute-0 sudo[219971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:28 compute-0 sudo[219929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:28 compute-0 sudo[219929]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:28 compute-0 sudo[219976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:47:28 compute-0 sudo[219976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:28 compute-0 sudo[219976]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:28 compute-0 sudo[220001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:47:28 compute-0 sudo[220001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:28 compute-0 sudo[220001]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v520: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:28 compute-0 sudo[220026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:47:28 compute-0 sudo[220026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:28 compute-0 python3.9[219974]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:28 compute-0 sudo[219971]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:28 compute-0 sudo[220026]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:28 compute-0 sudo[220202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akmvldhjcdmrmiucvdluyrdxcjdhbhxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114447.8088746-1400-19216227534575/AnsiballZ_copy.py'
Nov 25 23:47:28 compute-0 sudo[220202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:47:28 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:47:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:47:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:47:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:47:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:47:28 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 6dfed50b-51db-4fb3-ba6d-90fdd6623b43 does not exist
Nov 25 23:47:28 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 7d435c94-08f5-434e-9c93-96f15fab3652 does not exist
Nov 25 23:47:28 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev b2ccf56b-882e-4af5-b230-1a10eb0d140f does not exist
Nov 25 23:47:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:47:28 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:47:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:47:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:47:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:47:28 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:47:28 compute-0 sudo[220205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:47:28 compute-0 sudo[220205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:28 compute-0 sudo[220205]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:29 compute-0 sudo[220230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:47:29 compute-0 sudo[220230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:29 compute-0 sudo[220230]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:29 compute-0 python3.9[220204]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114447.8088746-1400-19216227534575/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:29 compute-0 sudo[220202]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:29 compute-0 sudo[220255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:47:29 compute-0 sudo[220255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:29 compute-0 sudo[220255]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:29 compute-0 sudo[220298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:47:29 compute-0 sudo[220298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:29 compute-0 podman[220438]: 2025-11-25 23:47:29.555067217 +0000 UTC m=+0.041627136 container create 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:47:29 compute-0 systemd[1]: Started libpod-conmon-0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b.scope.
Nov 25 23:47:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:47:29 compute-0 podman[220438]: 2025-11-25 23:47:29.534546762 +0000 UTC m=+0.021106721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:47:29 compute-0 podman[220438]: 2025-11-25 23:47:29.630738255 +0000 UTC m=+0.117298184 container init 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:47:29 compute-0 podman[220438]: 2025-11-25 23:47:29.636080407 +0000 UTC m=+0.122640316 container start 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:47:29 compute-0 podman[220438]: 2025-11-25 23:47:29.639452586 +0000 UTC m=+0.126012525 container attach 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:47:29 compute-0 agitated_mendeleev[220480]: 167 167
Nov 25 23:47:29 compute-0 systemd[1]: libpod-0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b.scope: Deactivated successfully.
Nov 25 23:47:29 compute-0 podman[220438]: 2025-11-25 23:47:29.643131184 +0000 UTC m=+0.129691103 container died 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:47:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec4b9fd06cd33e9efdd00c36d4c19afbdcacc44a62155508549cb935a8f9bef-merged.mount: Deactivated successfully.
Nov 25 23:47:29 compute-0 podman[220438]: 2025-11-25 23:47:29.688259071 +0000 UTC m=+0.174818980 container remove 0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mendeleev, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 23:47:29 compute-0 ceph-mon[75654]: pgmap v520: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:47:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:47:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:47:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:47:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:47:29 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:47:29 compute-0 sudo[220526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvvqnwssixibhytsucqsfealbecmdwnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114449.388985-1415-247234218831530/AnsiballZ_stat.py'
Nov 25 23:47:29 compute-0 systemd[1]: libpod-conmon-0ba15172a906220782e582f60acb7137e2db6dc234e1b0ee867b5d2753bc690b.scope: Deactivated successfully.
Nov 25 23:47:29 compute-0 sudo[220526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:29 compute-0 podman[220539]: 2025-11-25 23:47:29.857354408 +0000 UTC m=+0.049731971 container create ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:47:29 compute-0 systemd[1]: Started libpod-conmon-ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365.scope.
Nov 25 23:47:29 compute-0 python3.9[220533]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:47:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:29 compute-0 sudo[220526]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:29 compute-0 podman[220539]: 2025-11-25 23:47:29.834556823 +0000 UTC m=+0.026934416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:47:29 compute-0 podman[220539]: 2025-11-25 23:47:29.945291861 +0000 UTC m=+0.137669434 container init ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:47:29 compute-0 podman[220539]: 2025-11-25 23:47:29.956526679 +0000 UTC m=+0.148904242 container start ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:47:29 compute-0 podman[220539]: 2025-11-25 23:47:29.959921869 +0000 UTC m=+0.152299432 container attach ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:47:30 compute-0 sudo[220681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxcneajwbxmvtlfbboulimhtjocfokhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114449.388985-1415-247234218831530/AnsiballZ_copy.py'
Nov 25 23:47:30 compute-0 sudo[220681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:30 compute-0 python3.9[220683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114449.388985-1415-247234218831530/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:47:30 compute-0 sudo[220681]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:30 compute-0 quizzical_wing[220556]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:47:30 compute-0 quizzical_wing[220556]: --> relative data size: 1.0
Nov 25 23:47:30 compute-0 quizzical_wing[220556]: --> All data devices are unavailable
Nov 25 23:47:30 compute-0 systemd[1]: libpod-ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365.scope: Deactivated successfully.
Nov 25 23:47:30 compute-0 podman[220539]: 2025-11-25 23:47:30.99001331 +0000 UTC m=+1.182390903 container died ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:47:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-72f1501e18db30045fd9025befbad2abbd5eea486d1a12703c0620776ef9018b-merged.mount: Deactivated successfully.
Nov 25 23:47:31 compute-0 podman[220539]: 2025-11-25 23:47:31.108394432 +0000 UTC m=+1.300772035 container remove ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:47:31 compute-0 systemd[1]: libpod-conmon-ae426b67865cbf905617ba4644922d7761cb9be012aaf18da0bd6f316cb6d365.scope: Deactivated successfully.
Nov 25 23:47:31 compute-0 sudo[220298]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:31 compute-0 sudo[220872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acndjyisxygtcxdbmxxsyoiyjpdmxuon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114450.763052-1430-10011498483112/AnsiballZ_systemd.py'
Nov 25 23:47:31 compute-0 sudo[220872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:31 compute-0 sudo[220871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:47:31 compute-0 sudo[220871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:31 compute-0 sudo[220871]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:31 compute-0 sudo[220899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:47:31 compute-0 sudo[220899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:31 compute-0 sudo[220899]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:31 compute-0 sudo[220924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:47:31 compute-0 sudo[220924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:31 compute-0 sudo[220924]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:31 compute-0 sudo[220949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:47:31 compute-0 sudo[220949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:31 compute-0 python3.9[220885]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:47:31 compute-0 systemd[1]: Reloading.
Nov 25 23:47:31 compute-0 systemd-rc-local-generator[221001]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:47:31 compute-0 systemd-sysv-generator[221005]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:47:31 compute-0 ceph-mon[75654]: pgmap v521: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:31 compute-0 podman[221048]: 2025-11-25 23:47:31.831533968 +0000 UTC m=+0.053628674 container create 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:47:31 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 25 23:47:31 compute-0 sudo[220872]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:31 compute-0 systemd[1]: Started libpod-conmon-6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441.scope.
Nov 25 23:47:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:47:31 compute-0 podman[221048]: 2025-11-25 23:47:31.910367589 +0000 UTC m=+0.132462305 container init 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:47:31 compute-0 podman[221048]: 2025-11-25 23:47:31.81766854 +0000 UTC m=+0.039763246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:47:31 compute-0 podman[221048]: 2025-11-25 23:47:31.916935424 +0000 UTC m=+0.139030110 container start 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 23:47:31 compute-0 podman[221048]: 2025-11-25 23:47:31.919901372 +0000 UTC m=+0.141996088 container attach 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:47:31 compute-0 exciting_archimedes[221067]: 167 167
Nov 25 23:47:31 compute-0 systemd[1]: libpod-6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441.scope: Deactivated successfully.
Nov 25 23:47:31 compute-0 podman[221048]: 2025-11-25 23:47:31.923078436 +0000 UTC m=+0.145173122 container died 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:47:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f22404195051bb93b8effc5b15dd6f4612eb9818105898794fab39ad807a8a4e-merged.mount: Deactivated successfully.
Nov 25 23:47:31 compute-0 podman[221048]: 2025-11-25 23:47:31.955151277 +0000 UTC m=+0.177245963 container remove 6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_archimedes, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 23:47:31 compute-0 systemd[1]: libpod-conmon-6c601b645ec8881f7d611855a0ddf32c598eddd2abb299bd63e75ced89700441.scope: Deactivated successfully.
Nov 25 23:47:32 compute-0 podman[221139]: 2025-11-25 23:47:32.102679392 +0000 UTC m=+0.041018800 container create 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:47:32 compute-0 systemd[1]: Started libpod-conmon-0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8.scope.
Nov 25 23:47:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:47:32 compute-0 podman[221139]: 2025-11-25 23:47:32.084809708 +0000 UTC m=+0.023149086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee98f585291b40e89044453c08febece093b4852a6b074a8f59a6a1d3a2a20e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee98f585291b40e89044453c08febece093b4852a6b074a8f59a6a1d3a2a20e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee98f585291b40e89044453c08febece093b4852a6b074a8f59a6a1d3a2a20e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee98f585291b40e89044453c08febece093b4852a6b074a8f59a6a1d3a2a20e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:32 compute-0 podman[221139]: 2025-11-25 23:47:32.23975953 +0000 UTC m=+0.178098918 container init 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 23:47:32 compute-0 podman[221139]: 2025-11-25 23:47:32.248326077 +0000 UTC m=+0.186665435 container start 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:47:32 compute-0 podman[221139]: 2025-11-25 23:47:32.26087616 +0000 UTC m=+0.199215568 container attach 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:47:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:32 compute-0 sudo[221262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcngvuuqndmdzvmuesuvahzgptjzotdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114452.0568607-1438-241795323288209/AnsiballZ_systemd.py'
Nov 25 23:47:32 compute-0 sudo[221262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:32 compute-0 python3.9[221264]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 23:47:32 compute-0 systemd[1]: Reloading.
Nov 25 23:47:32 compute-0 systemd-rc-local-generator[221291]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:47:32 compute-0 systemd-sysv-generator[221296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:47:32 compute-0 frosty_neumann[221184]: {
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:     "0": [
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:         {
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "devices": [
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "/dev/loop3"
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             ],
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_name": "ceph_lv0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_size": "21470642176",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "name": "ceph_lv0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "tags": {
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.cluster_name": "ceph",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.crush_device_class": "",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.encrypted": "0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.osd_id": "0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.type": "block",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.vdo": "0"
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             },
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "type": "block",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "vg_name": "ceph_vg0"
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:         }
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:     ],
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:     "1": [
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:         {
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "devices": [
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "/dev/loop4"
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             ],
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_name": "ceph_lv1",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_size": "21470642176",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "name": "ceph_lv1",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "tags": {
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.cluster_name": "ceph",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.crush_device_class": "",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.encrypted": "0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.osd_id": "1",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.type": "block",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.vdo": "0"
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             },
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "type": "block",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "vg_name": "ceph_vg1"
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:         }
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:     ],
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:     "2": [
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:         {
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "devices": [
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "/dev/loop5"
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             ],
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_name": "ceph_lv2",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_size": "21470642176",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "name": "ceph_lv2",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "tags": {
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.cluster_name": "ceph",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.crush_device_class": "",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.encrypted": "0",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.osd_id": "2",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.type": "block",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:                 "ceph.vdo": "0"
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             },
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "type": "block",
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:             "vg_name": "ceph_vg2"
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:         }
Nov 25 23:47:32 compute-0 frosty_neumann[221184]:     ]
Nov 25 23:47:32 compute-0 frosty_neumann[221184]: }
Nov 25 23:47:32 compute-0 podman[221139]: 2025-11-25 23:47:32.958403117 +0000 UTC m=+0.896742475 container died 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:47:33 compute-0 systemd[1]: libpod-0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8.scope: Deactivated successfully.
Nov 25 23:47:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee98f585291b40e89044453c08febece093b4852a6b074a8f59a6a1d3a2a20e4-merged.mount: Deactivated successfully.
Nov 25 23:47:33 compute-0 podman[221139]: 2025-11-25 23:47:33.205849422 +0000 UTC m=+1.144188820 container remove 0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_neumann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:47:33 compute-0 systemd[1]: libpod-conmon-0f8ea2f6a56fc71cf6a9ba13f59f8d67da340939b0f5af7adbeb82b3db0003f8.scope: Deactivated successfully.
Nov 25 23:47:33 compute-0 sudo[220949]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:33 compute-0 sudo[221320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:47:33 compute-0 sudo[221320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:33 compute-0 sudo[221320]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:33 compute-0 sudo[221345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:47:33 compute-0 sudo[221345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:33 compute-0 sudo[221345]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:33 compute-0 sudo[221370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:47:33 compute-0 sudo[221370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:33 compute-0 sudo[221370]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:33 compute-0 sudo[221395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:47:33 compute-0 sudo[221395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:33 compute-0 ceph-mon[75654]: pgmap v522: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:34 compute-0 podman[221459]: 2025-11-25 23:47:34.019692356 +0000 UTC m=+0.061982616 container create 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 23:47:34 compute-0 systemd[1]: Started libpod-conmon-06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927.scope.
Nov 25 23:47:34 compute-0 podman[221459]: 2025-11-25 23:47:33.99462339 +0000 UTC m=+0.036913700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:47:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:47:34 compute-0 podman[221459]: 2025-11-25 23:47:34.124698922 +0000 UTC m=+0.166989162 container init 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 25 23:47:34 compute-0 podman[221459]: 2025-11-25 23:47:34.134374358 +0000 UTC m=+0.176664618 container start 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:47:34 compute-0 podman[221459]: 2025-11-25 23:47:34.138634281 +0000 UTC m=+0.180924521 container attach 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:47:34 compute-0 festive_stonebraker[221477]: 167 167
Nov 25 23:47:34 compute-0 systemd[1]: libpod-06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927.scope: Deactivated successfully.
Nov 25 23:47:34 compute-0 conmon[221477]: conmon 06ebaab5236894bccf83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927.scope/container/memory.events
Nov 25 23:47:34 compute-0 podman[221459]: 2025-11-25 23:47:34.144404564 +0000 UTC m=+0.186694794 container died 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 23:47:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-60e59a7c48d658790d0620432e8d60b073618f187c98b5a72d42cd6153b892e0-merged.mount: Deactivated successfully.
Nov 25 23:47:34 compute-0 podman[221459]: 2025-11-25 23:47:34.191995477 +0000 UTC m=+0.234285737 container remove 06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 25 23:47:34 compute-0 systemd[1]: libpod-conmon-06ebaab5236894bccf83f19611bf1d05526d4879096d38a4086b28aadde28927.scope: Deactivated successfully.
Nov 25 23:47:34 compute-0 systemd[1]: Reloading.
Nov 25 23:47:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:34 compute-0 podman[221502]: 2025-11-25 23:47:34.37451457 +0000 UTC m=+0.057983090 container create de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:47:34 compute-0 systemd-rc-local-generator[221538]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:47:34 compute-0 systemd-sysv-generator[221541]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:47:34 compute-0 podman[221502]: 2025-11-25 23:47:34.346830225 +0000 UTC m=+0.030298805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:47:34 compute-0 systemd[1]: Started libpod-conmon-de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542.scope.
Nov 25 23:47:34 compute-0 sudo[221262]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2639ceb05ff352a4fffa2dfb6b0f09c3821ec8cd8e4248ea9b9ea2a4075620e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2639ceb05ff352a4fffa2dfb6b0f09c3821ec8cd8e4248ea9b9ea2a4075620e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2639ceb05ff352a4fffa2dfb6b0f09c3821ec8cd8e4248ea9b9ea2a4075620e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2639ceb05ff352a4fffa2dfb6b0f09c3821ec8cd8e4248ea9b9ea2a4075620e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:47:34 compute-0 podman[221502]: 2025-11-25 23:47:34.728676086 +0000 UTC m=+0.412144696 container init de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:47:34 compute-0 podman[221502]: 2025-11-25 23:47:34.744348302 +0000 UTC m=+0.427816862 container start de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:47:34 compute-0 podman[221502]: 2025-11-25 23:47:34.747824965 +0000 UTC m=+0.431293575 container attach de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:47:35 compute-0 sshd-session[160849]: Connection closed by 192.168.122.30 port 53664
Nov 25 23:47:35 compute-0 sshd-session[160846]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:47:35 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 25 23:47:35 compute-0 systemd[1]: session-49.scope: Consumed 3min 52.939s CPU time.
Nov 25 23:47:35 compute-0 systemd-logind[789]: Session 49 logged out. Waiting for processes to exit.
Nov 25 23:47:35 compute-0 systemd-logind[789]: Removed session 49.
Nov 25 23:47:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:35 compute-0 ceph-mon[75654]: pgmap v523: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]: {
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "osd_id": 2,
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "type": "bluestore"
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:     },
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "osd_id": 1,
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "type": "bluestore"
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:     },
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "osd_id": 0,
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:         "type": "bluestore"
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]:     }
Nov 25 23:47:35 compute-0 optimistic_taussig[221551]: }
Nov 25 23:47:35 compute-0 systemd[1]: libpod-de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542.scope: Deactivated successfully.
Nov 25 23:47:35 compute-0 systemd[1]: libpod-de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542.scope: Consumed 1.083s CPU time.
Nov 25 23:47:35 compute-0 podman[221502]: 2025-11-25 23:47:35.819086368 +0000 UTC m=+1.502554928 container died de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 23:47:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2639ceb05ff352a4fffa2dfb6b0f09c3821ec8cd8e4248ea9b9ea2a4075620e1-merged.mount: Deactivated successfully.
Nov 25 23:47:35 compute-0 podman[221502]: 2025-11-25 23:47:35.887244367 +0000 UTC m=+1.570712907 container remove de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_taussig, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:47:35 compute-0 systemd[1]: libpod-conmon-de33ff04c4153d948937dfe8b580003d4b476a2c380d65426024ae2e8fc64542.scope: Deactivated successfully.
Nov 25 23:47:35 compute-0 sudo[221395]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:47:35 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:47:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:47:35 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:47:35 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 86f9501c-2e34-4d39-b870-c03c132443e2 does not exist
Nov 25 23:47:36 compute-0 sudo[221623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:47:36 compute-0 sudo[221623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:36 compute-0 sudo[221623]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:36 compute-0 sudo[221654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:47:36 compute-0 sudo[221654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:47:36 compute-0 sudo[221654]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:36 compute-0 podman[221647]: 2025-11-25 23:47:36.238985879 +0000 UTC m=+0.134926831 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:47:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:47:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:47:36 compute-0 ceph-mon[75654]: pgmap v524: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v525: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:39 compute-0 ceph-mon[75654]: pgmap v525: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v526: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:40 compute-0 sshd-session[221697]: Accepted publickey for zuul from 192.168.122.30 port 52070 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:47:40 compute-0 systemd-logind[789]: New session 50 of user zuul.
Nov 25 23:47:40 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 25 23:47:40 compute-0 sshd-session[221697]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:47:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:47:40.754 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:47:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:47:40.756 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:47:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:47:40.756 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:47:41 compute-0 ceph-mon[75654]: pgmap v526: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:41 compute-0 python3.9[221850]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:47:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:43 compute-0 python3.9[222004]: ansible-ansible.builtin.service_facts Invoked
Nov 25 23:47:43 compute-0 network[222021]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 23:47:43 compute-0 network[222022]: 'network-scripts' will be removed from distribution in near future.
Nov 25 23:47:43 compute-0 network[222023]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 23:47:43 compute-0 ceph-mon[75654]: pgmap v527: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:44 compute-0 podman[222030]: 2025-11-25 23:47:44.281658619 +0000 UTC m=+0.095657825 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 23:47:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:45 compute-0 ceph-mon[75654]: pgmap v528: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:47 compute-0 ceph-mon[75654]: pgmap v529: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:49 compute-0 sudo[222312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixhfvlkyqlrlaqzytbmxsujvxfvfovir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114468.5919569-47-992640981724/AnsiballZ_setup.py'
Nov 25 23:47:49 compute-0 sudo[222312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:49 compute-0 python3.9[222314]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 23:47:49 compute-0 ceph-mon[75654]: pgmap v530: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:49 compute-0 sudo[222312]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:50 compute-0 sudo[222396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwixdhoqltiplsybwbasumvsfzqfzdww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114468.5919569-47-992640981724/AnsiballZ_dnf.py'
Nov 25 23:47:50 compute-0 sudo[222396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:50 compute-0 python3.9[222398]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:47:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:51 compute-0 ceph-mon[75654]: pgmap v531: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:52 compute-0 sshd-session[222400]: Received disconnect from 45.78.216.132 port 41772:11: Bye Bye [preauth]
Nov 25 23:47:52 compute-0 sshd-session[222400]: Disconnected from authenticating user root 45.78.216.132 port 41772 [preauth]
Nov 25 23:47:52 compute-0 sshd-session[222402]: Received disconnect from 171.244.140.22 port 59086:11: Bye Bye [preauth]
Nov 25 23:47:52 compute-0 sshd-session[222402]: Disconnected from authenticating user root 171.244.140.22 port 59086 [preauth]
Nov 25 23:47:53 compute-0 ceph-mon[75654]: pgmap v532: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:55 compute-0 sudo[222396]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:47:55 compute-0 ceph-mon[75654]: pgmap v533: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:47:56
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'images', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.meta']
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:47:56 compute-0 sudo[222553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxszfewquubualxokhbxaiichizhorko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114475.7003121-59-184215283809323/AnsiballZ_stat.py'
Nov 25 23:47:56 compute-0 sudo[222553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:56 compute-0 python3.9[222555]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:47:56 compute-0 sudo[222553]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:57 compute-0 sudo[222705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkxbkxxiiwrzflfuyugfevamcwqyfpyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114476.820096-69-222431035035358/AnsiballZ_command.py'
Nov 25 23:47:57 compute-0 sudo[222705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:57 compute-0 ceph-mon[75654]: pgmap v534: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:57 compute-0 python3.9[222707]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:47:57 compute-0 sudo[222705]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:58 compute-0 sudo[222858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lykrvcxkbmrabjbvxhjjgfnudztxkaqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114478.0345628-79-134646719110252/AnsiballZ_stat.py'
Nov 25 23:47:58 compute-0 sudo[222858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:58 compute-0 python3.9[222860]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:47:58 compute-0 sudo[222858]: pam_unix(sudo:session): session closed for user root
Nov 25 23:47:59 compute-0 sudo[223010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diujakizcbhutmadjnydybdciniznaqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114478.881028-87-114546543030701/AnsiballZ_command.py'
Nov 25 23:47:59 compute-0 sudo[223010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:47:59 compute-0 ceph-mon[75654]: pgmap v535: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:47:59 compute-0 python3.9[223012]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:47:59 compute-0 sudo[223010]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:00 compute-0 sudo[223163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvelbqwuolksfkuvivfcjvhyyomakpqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114479.8658173-95-15858991582194/AnsiballZ_stat.py'
Nov 25 23:48:00 compute-0 sudo[223163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:00 compute-0 python3.9[223165]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:00 compute-0 sudo[223163]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:01 compute-0 sudo[223286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruwehmxkruznlnbneluvgyzzmacplncq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114479.8658173-95-15858991582194/AnsiballZ_copy.py'
Nov 25 23:48:01 compute-0 sudo[223286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:01 compute-0 python3.9[223288]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114479.8658173-95-15858991582194/.source.iscsi _original_basename=.jbmbs7j9 follow=False checksum=1bf7c32893e80615accff856e371c26699a7fd74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:01 compute-0 sudo[223286]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:01 compute-0 ceph-mon[75654]: pgmap v536: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:48:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:48:02 compute-0 sudo[223438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmauacjckksjwdwxjnpiulzgazubplmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114481.6673918-110-52646615584947/AnsiballZ_file.py'
Nov 25 23:48:02 compute-0 sudo[223438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:02 compute-0 python3.9[223440]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:02 compute-0 sudo[223438]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:03 compute-0 sudo[223590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuntoirnvmvzxxvpmozgwqcvmxhdjvya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114482.7095344-118-14193529329308/AnsiballZ_lineinfile.py'
Nov 25 23:48:03 compute-0 sudo[223590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:03 compute-0 python3.9[223592]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:03 compute-0 sudo[223590]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:03 compute-0 ceph-mon[75654]: pgmap v537: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:04 compute-0 sudo[223742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkupksycrzipftawwdozupsflwcwrlmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114483.7427309-127-132572285134480/AnsiballZ_systemd_service.py'
Nov 25 23:48:04 compute-0 sudo[223742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:04 compute-0 python3.9[223744]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:48:04 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 25 23:48:04 compute-0 sudo[223742]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:05 compute-0 sudo[223898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciqtgeexycncsogjxskytelwohcvkmre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114485.0286326-135-225697911293525/AnsiballZ_systemd_service.py'
Nov 25 23:48:05 compute-0 sudo[223898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:05 compute-0 ceph-mon[75654]: pgmap v538: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:05 compute-0 python3.9[223900]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:48:05 compute-0 systemd[1]: Reloading.
Nov 25 23:48:05 compute-0 systemd-rc-local-generator[223929]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:48:05 compute-0 systemd-sysv-generator[223933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:48:06 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 23:48:06 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 25 23:48:06 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 25 23:48:06 compute-0 systemd[1]: Started Open-iSCSI.
Nov 25 23:48:06 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 25 23:48:06 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 25 23:48:06 compute-0 sudo[223898]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:06 compute-0 sudo[224109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqoxbzhwpwushdpnduyqexyitujhxazr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114486.540675-146-50237141351165/AnsiballZ_service_facts.py'
Nov 25 23:48:06 compute-0 sudo[224109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:07 compute-0 podman[224073]: 2025-11-25 23:48:07.008787756 +0000 UTC m=+0.144522900 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 23:48:07 compute-0 python3.9[224121]: ansible-ansible.builtin.service_facts Invoked
Nov 25 23:48:07 compute-0 network[224144]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 23:48:07 compute-0 network[224145]: 'network-scripts' will be removed from distribution in near future.
Nov 25 23:48:07 compute-0 network[224146]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 23:48:07 compute-0 ceph-mon[75654]: pgmap v539: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:09 compute-0 ceph-mon[75654]: pgmap v540: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:11 compute-0 sudo[224109]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:11 compute-0 ceph-mon[75654]: pgmap v541: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:12 compute-0 sudo[224416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmsnondqygwpbibauemepvgkdgegssca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114491.7612398-156-252337288621107/AnsiballZ_file.py'
Nov 25 23:48:12 compute-0 sudo[224416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:12 compute-0 python3.9[224418]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 23:48:12 compute-0 sudo[224416]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:12 compute-0 sudo[224568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sacbgqhoezwsafwprmueobulfpllrsnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114492.4741545-164-202774711093310/AnsiballZ_modprobe.py'
Nov 25 23:48:12 compute-0 sudo[224568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:13 compute-0 python3.9[224570]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 25 23:48:13 compute-0 sudo[224568]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:13 compute-0 ceph-mon[75654]: pgmap v542: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:13 compute-0 sudo[224724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlazpxiqtcaetllxgcpsukzkcuapzpsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114493.491119-172-10177492081552/AnsiballZ_stat.py'
Nov 25 23:48:13 compute-0 sudo[224724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:14 compute-0 python3.9[224726]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:14 compute-0 sudo[224724]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:14 compute-0 sudo[224865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ystccsldehvukeusdqzsstxskawrxbpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114493.491119-172-10177492081552/AnsiballZ_copy.py'
Nov 25 23:48:14 compute-0 sudo[224865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:14 compute-0 podman[224821]: 2025-11-25 23:48:14.64092085 +0000 UTC m=+0.062705633 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 25 23:48:14 compute-0 python3.9[224869]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114493.491119-172-10177492081552/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:14 compute-0 sudo[224865]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:15 compute-0 sudo[225019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvzsydlylqjafiygbhksrpjtkqmjtfmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114495.1299105-188-120634497573462/AnsiballZ_lineinfile.py'
Nov 25 23:48:15 compute-0 sudo[225019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:15 compute-0 ceph-mon[75654]: pgmap v543: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:15 compute-0 python3.9[225021]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:15 compute-0 sudo[225019]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:16 compute-0 sudo[225171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhebohuqauzqxtqpcvosrdqsnatanshj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114496.012154-196-87859221824892/AnsiballZ_systemd.py'
Nov 25 23:48:16 compute-0 sudo[225171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:17 compute-0 python3.9[225173]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:48:17 compute-0 ceph-mon[75654]: pgmap v544: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:18 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 23:48:18 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 25 23:48:18 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 25 23:48:18 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 23:48:18 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 23:48:18 compute-0 sudo[225171]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:18 compute-0 sudo[225327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whrqwsahupoyincihugoguvznjlzdcmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114498.3845243-204-242231565950481/AnsiballZ_file.py'
Nov 25 23:48:18 compute-0 sudo[225327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:18 compute-0 python3.9[225329]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:48:19 compute-0 sudo[225327]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:19 compute-0 sudo[225479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwpyujmvhjoxddqculjaeotrrbbkpuhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114499.2584896-213-119913405463730/AnsiballZ_stat.py'
Nov 25 23:48:19 compute-0 ceph-mon[75654]: pgmap v545: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:19 compute-0 sudo[225479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:19 compute-0 python3.9[225481]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:48:19 compute-0 sudo[225479]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:20 compute-0 sudo[225631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxcpitkgvoanocyzgqbspyfkwxskhvzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114500.09081-222-115792257301167/AnsiballZ_stat.py'
Nov 25 23:48:20 compute-0 sudo[225631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:20 compute-0 python3.9[225633]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:48:20 compute-0 sudo[225631]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:21 compute-0 sudo[225783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owwavsifhbkdqttgahtrpgmlkrjwuvmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114500.9145272-230-83199037616165/AnsiballZ_stat.py'
Nov 25 23:48:21 compute-0 sudo[225783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:21 compute-0 python3.9[225785]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:21 compute-0 sudo[225783]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:21 compute-0 ceph-mon[75654]: pgmap v546: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:21 compute-0 sudo[225906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtbftlflbomssckqxgxcvvsnrsrupyeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114500.9145272-230-83199037616165/AnsiballZ_copy.py'
Nov 25 23:48:21 compute-0 sudo[225906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:22 compute-0 python3.9[225908]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114500.9145272-230-83199037616165/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:22 compute-0 sudo[225906]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:22 compute-0 sudo[226058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpfrnfgrkicixpuzyywlwribivjhwimc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114502.357413-245-89545363858927/AnsiballZ_command.py'
Nov 25 23:48:22 compute-0 sudo[226058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:22 compute-0 python3.9[226060]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:48:22 compute-0 sudo[226058]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:24 compute-0 ceph-mon[75654]: pgmap v547: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:24 compute-0 sudo[226211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvhlbqzdvvojpuuyxrumyooyeerbygvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114503.1942034-253-14861196889876/AnsiballZ_lineinfile.py'
Nov 25 23:48:24 compute-0 sudo[226211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:24 compute-0 python3.9[226213]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:24 compute-0 sudo[226211]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:25 compute-0 ceph-mon[75654]: pgmap v548: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:25 compute-0 sudo[226363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiqmmunrxvxnjvyduajclyjjqpeihcay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114504.9314308-261-102704883908174/AnsiballZ_replace.py'
Nov 25 23:48:25 compute-0 sudo[226363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:25 compute-0 python3.9[226365]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:25 compute-0 sudo[226363]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:48:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:48:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:48:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:48:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:48:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:48:26 compute-0 sudo[226515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxfkzmobbrxtddbrxqotgbvkzwfdxbdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114505.9805646-269-166508788159549/AnsiballZ_replace.py'
Nov 25 23:48:26 compute-0 sudo[226515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:26 compute-0 python3.9[226517]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:26 compute-0 sudo[226515]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:27 compute-0 sudo[226667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfiswyvnhviszurpmoignnuyhlqucvfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114506.8488648-278-123258841911308/AnsiballZ_lineinfile.py'
Nov 25 23:48:27 compute-0 sudo[226667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:27 compute-0 python3.9[226669]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:27 compute-0 sudo[226667]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:27 compute-0 ceph-mon[75654]: pgmap v549: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:28 compute-0 sudo[226819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibywpupbtjxfejaaouqgmsqhtnjlbrcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114507.5312545-278-262225635201775/AnsiballZ_lineinfile.py'
Nov 25 23:48:28 compute-0 sudo[226819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:28 compute-0 python3.9[226821]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:28 compute-0 sudo[226819]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:28 compute-0 sudo[226971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqlhficyxdjczovjjommteoyukqshixx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114508.5364888-278-194363442321563/AnsiballZ_lineinfile.py'
Nov 25 23:48:28 compute-0 sudo[226971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:29 compute-0 python3.9[226973]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:29 compute-0 sudo[226971]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:29 compute-0 ceph-mon[75654]: pgmap v550: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:29 compute-0 sudo[227123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbprukagtgsvppsnlzmigvwjfpcoxvsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114509.2128007-278-276056278469530/AnsiballZ_lineinfile.py'
Nov 25 23:48:29 compute-0 sudo[227123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:29 compute-0 python3.9[227125]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:29 compute-0 sudo[227123]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:30 compute-0 sudo[227275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btxpsqxrjxjadtvnltbbmttkxthssuue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114510.0458028-307-132341829917107/AnsiballZ_stat.py'
Nov 25 23:48:30 compute-0 sudo[227275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:30 compute-0 python3.9[227277]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:48:30 compute-0 sudo[227275]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:31 compute-0 sudo[227429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gakzsknwxdcafuwpwbqeepgrkczsiacs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114510.8732924-315-216055513238265/AnsiballZ_file.py'
Nov 25 23:48:31 compute-0 sudo[227429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:31 compute-0 python3.9[227431]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:31 compute-0 ceph-mon[75654]: pgmap v551: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:31 compute-0 sudo[227429]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:32 compute-0 sudo[227581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gurttceievtnlbvmlrhhadamkjfmoimu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114511.7186747-324-170160029280412/AnsiballZ_file.py'
Nov 25 23:48:32 compute-0 sudo[227581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:32 compute-0 python3.9[227583]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:48:32 compute-0 sudo[227581]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:32 compute-0 sudo[227733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqrjpaimweiwzggryogfocslkcqyytct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114512.5980191-332-123019913414498/AnsiballZ_stat.py'
Nov 25 23:48:32 compute-0 sudo[227733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:33 compute-0 python3.9[227735]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:33 compute-0 sudo[227733]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:33 compute-0 ceph-mon[75654]: pgmap v552: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:33 compute-0 sudo[227811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyliosihqmlhkmzskgzenzcaapnissqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114512.5980191-332-123019913414498/AnsiballZ_file.py'
Nov 25 23:48:33 compute-0 sudo[227811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:33 compute-0 python3.9[227813]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:48:33 compute-0 sudo[227811]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:34 compute-0 sudo[227963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwmbofnpkeakpvhnystoayeapsslbedu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114513.893961-332-26765536050870/AnsiballZ_stat.py'
Nov 25 23:48:34 compute-0 sudo[227963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:34 compute-0 python3.9[227965]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:34 compute-0 sudo[227963]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:34 compute-0 sudo[228041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uffqyzgdztygvqpmxejuahurzyjwjkjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114513.893961-332-26765536050870/AnsiballZ_file.py'
Nov 25 23:48:34 compute-0 sudo[228041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:34 compute-0 python3.9[228043]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:48:34 compute-0 sudo[228041]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:35 compute-0 sudo[228193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sraqmaphonepjqgtmuifxflajehzakma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114515.1326492-355-105335305195707/AnsiballZ_file.py'
Nov 25 23:48:35 compute-0 sudo[228193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:35 compute-0 ceph-mon[75654]: pgmap v553: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:35 compute-0 python3.9[228195]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:35 compute-0 sudo[228193]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:36 compute-0 sudo[228319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:48:36 compute-0 sudo[228319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:36 compute-0 sudo[228319]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:36 compute-0 sudo[228368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhqdhigwmswpfindlymbqhuuywbmnzgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114515.785618-363-182803555991520/AnsiballZ_stat.py'
Nov 25 23:48:36 compute-0 sudo[228368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v554: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:36 compute-0 sudo[228373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:48:36 compute-0 sudo[228373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:36 compute-0 sudo[228373]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:36 compute-0 sudo[228398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:48:36 compute-0 sudo[228398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:36 compute-0 sudo[228398]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:36 compute-0 python3.9[228372]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:36 compute-0 sudo[228368]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:36 compute-0 sudo[228423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:48:36 compute-0 sudo[228423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:36 compute-0 sudo[228536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyscpyfnkvjbjhdhmbnnmmsxbkjtjorz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114515.785618-363-182803555991520/AnsiballZ_file.py'
Nov 25 23:48:36 compute-0 sudo[228536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:36 compute-0 python3.9[228539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:36 compute-0 sudo[228536]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:37 compute-0 sudo[228423]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:48:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:48:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:48:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:48:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:48:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:48:37 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 2f023514-b421-437b-a9ac-519d1fcd0e26 does not exist
Nov 25 23:48:37 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 4fe13760-bfed-47ee-9742-0afdd5fb7c67 does not exist
Nov 25 23:48:37 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 76465874-22b1-4e6a-b601-0acc79d16311 does not exist
Nov 25 23:48:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:48:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:48:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:48:37 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:48:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:48:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:48:37 compute-0 sudo[228581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:48:37 compute-0 sudo[228581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:37 compute-0 sudo[228581]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:37 compute-0 sudo[228620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:48:37 compute-0 sudo[228620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:37 compute-0 sudo[228620]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:37 compute-0 podman[228605]: 2025-11-25 23:48:37.232787051 +0000 UTC m=+0.076721653 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 23:48:37 compute-0 sudo[228710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:48:37 compute-0 sudo[228710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:37 compute-0 sudo[228710]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:37 compute-0 sudo[228753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:48:37 compute-0 sudo[228753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:37 compute-0 sudo[228833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdxaydutwkhpugkouuxilthvhnzcsnsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114517.1803238-375-131932015203397/AnsiballZ_stat.py'
Nov 25 23:48:37 compute-0 ceph-mon[75654]: pgmap v554: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:48:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:48:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:48:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:48:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:48:37 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:48:37 compute-0 sudo[228833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:37 compute-0 python3.9[228837]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:37 compute-0 sudo[228833]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:37 compute-0 podman[228877]: 2025-11-25 23:48:37.730010695 +0000 UTC m=+0.049796220 container create 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 23:48:37 compute-0 systemd[1]: Started libpod-conmon-75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9.scope.
Nov 25 23:48:37 compute-0 podman[228877]: 2025-11-25 23:48:37.703326378 +0000 UTC m=+0.023111953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:48:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:48:37 compute-0 podman[228877]: 2025-11-25 23:48:37.835695916 +0000 UTC m=+0.155481421 container init 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 23:48:37 compute-0 podman[228877]: 2025-11-25 23:48:37.84263808 +0000 UTC m=+0.162423585 container start 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:48:37 compute-0 podman[228877]: 2025-11-25 23:48:37.84679189 +0000 UTC m=+0.166577375 container attach 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 23:48:37 compute-0 dazzling_satoshi[228917]: 167 167
Nov 25 23:48:37 compute-0 systemd[1]: libpod-75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9.scope: Deactivated successfully.
Nov 25 23:48:37 compute-0 podman[228877]: 2025-11-25 23:48:37.849220975 +0000 UTC m=+0.169006480 container died 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 23:48:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7395c57757e58952d6e8a41f01282c69d68f9ab7d070e7050c27b7d8fa82befa-merged.mount: Deactivated successfully.
Nov 25 23:48:37 compute-0 podman[228877]: 2025-11-25 23:48:37.888153245 +0000 UTC m=+0.207938730 container remove 75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:48:37 compute-0 systemd[1]: libpod-conmon-75b06caa803ac3339f2f2c18da315e8dad6555e28f7c357783ff7e9d165bcbc9.scope: Deactivated successfully.
Nov 25 23:48:37 compute-0 sudo[228985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adecjwcwcuacrvwiqcefpweiafeedtuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114517.1803238-375-131932015203397/AnsiballZ_file.py'
Nov 25 23:48:37 compute-0 sudo[228985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:38 compute-0 podman[228993]: 2025-11-25 23:48:38.036414344 +0000 UTC m=+0.037228137 container create d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:48:38 compute-0 systemd[1]: Started libpod-conmon-d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20.scope.
Nov 25 23:48:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:38 compute-0 python3.9[228987]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:38 compute-0 podman[228993]: 2025-11-25 23:48:38.019593898 +0000 UTC m=+0.020407731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:48:38 compute-0 podman[228993]: 2025-11-25 23:48:38.128995986 +0000 UTC m=+0.129809809 container init d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:48:38 compute-0 sudo[228985]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:38 compute-0 podman[228993]: 2025-11-25 23:48:38.13666903 +0000 UTC m=+0.137482813 container start d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 23:48:38 compute-0 podman[228993]: 2025-11-25 23:48:38.139481655 +0000 UTC m=+0.140295448 container attach d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 23:48:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:38 compute-0 sudo[229164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeggkrymtmtsglddbjspvmuytogsxmhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114518.3249993-387-177017997308496/AnsiballZ_systemd.py'
Nov 25 23:48:38 compute-0 sudo[229164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:38 compute-0 python3.9[229166]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:48:39 compute-0 systemd[1]: Reloading.
Nov 25 23:48:39 compute-0 systemd-rc-local-generator[229216]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:48:39 compute-0 systemd-sysv-generator[229219]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:48:39 compute-0 flamboyant_chaum[229010]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:48:39 compute-0 flamboyant_chaum[229010]: --> relative data size: 1.0
Nov 25 23:48:39 compute-0 flamboyant_chaum[229010]: --> All data devices are unavailable
Nov 25 23:48:39 compute-0 podman[228993]: 2025-11-25 23:48:39.198255707 +0000 UTC m=+1.199069490 container died d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:48:39 compute-0 systemd[1]: libpod-d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20.scope: Deactivated successfully.
Nov 25 23:48:39 compute-0 systemd[1]: libpod-d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20.scope: Consumed 1.000s CPU time.
Nov 25 23:48:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe62e2b7021c2d8b4252bca24e93abb6590baa86d9d93e57fa500f5b2dc6c33e-merged.mount: Deactivated successfully.
Nov 25 23:48:39 compute-0 podman[228993]: 2025-11-25 23:48:39.416033927 +0000 UTC m=+1.416847710 container remove d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:48:39 compute-0 systemd[1]: libpod-conmon-d7e7f45669cd4a55b623df9bea79ebdcb4a5c89d886d15fec56dc735d4975d20.scope: Deactivated successfully.
Nov 25 23:48:39 compute-0 sudo[228753]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:39 compute-0 sudo[229164]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:39 compute-0 ceph-mon[75654]: pgmap v555: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:39 compute-0 sudo[229242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:48:39 compute-0 sudo[229242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:39 compute-0 sudo[229242]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:39 compute-0 sudo[229291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:48:39 compute-0 sudo[229291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:39 compute-0 sudo[229291]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:39 compute-0 sudo[229316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:48:39 compute-0 sudo[229316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:39 compute-0 sudo[229316]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:39 compute-0 sudo[229364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:48:39 compute-0 sudo[229364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:40 compute-0 sudo[229516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcauakzettvovhpqwmdlifblbyqodrvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114519.7012024-395-11718346817414/AnsiballZ_stat.py'
Nov 25 23:48:40 compute-0 sudo[229516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:40 compute-0 podman[229534]: 2025-11-25 23:48:40.206273894 +0000 UTC m=+0.048152656 container create 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 23:48:40 compute-0 systemd[1]: Started libpod-conmon-987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959.scope.
Nov 25 23:48:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:48:40 compute-0 python3.9[229520]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:40 compute-0 podman[229534]: 2025-11-25 23:48:40.187235919 +0000 UTC m=+0.029114711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:48:40 compute-0 podman[229534]: 2025-11-25 23:48:40.296941697 +0000 UTC m=+0.138820499 container init 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 23:48:40 compute-0 podman[229534]: 2025-11-25 23:48:40.3046302 +0000 UTC m=+0.146508952 container start 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:48:40 compute-0 podman[229534]: 2025-11-25 23:48:40.307514036 +0000 UTC m=+0.149392898 container attach 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:48:40 compute-0 zen_elion[229550]: 167 167
Nov 25 23:48:40 compute-0 systemd[1]: libpod-987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959.scope: Deactivated successfully.
Nov 25 23:48:40 compute-0 podman[229534]: 2025-11-25 23:48:40.313452804 +0000 UTC m=+0.155331586 container died 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:48:40 compute-0 sudo[229516]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1defb88753c0853a087f2845bfe85fb0c129d72836d117dfb9c2c448b5c3e53c-merged.mount: Deactivated successfully.
Nov 25 23:48:40 compute-0 podman[229534]: 2025-11-25 23:48:40.364778884 +0000 UTC m=+0.206657646 container remove 987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elion, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:48:40 compute-0 systemd[1]: libpod-conmon-987dc143192e081c4c16b1bcd5fbbd8611a55efbf291eddc28508c070397a959.scope: Deactivated successfully.
Nov 25 23:48:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:40 compute-0 podman[229604]: 2025-11-25 23:48:40.548929783 +0000 UTC m=+0.051075955 container create 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:48:40 compute-0 systemd[1]: Started libpod-conmon-083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6.scope.
Nov 25 23:48:40 compute-0 podman[229604]: 2025-11-25 23:48:40.526745105 +0000 UTC m=+0.028891297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:48:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:48:40 compute-0 sudo[229666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvlaoyeiugktfqcsiodwkafaaaqrfuxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114519.7012024-395-11718346817414/AnsiballZ_file.py'
Nov 25 23:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12b18be1f0c93aa4eb12bf08b10fc50d51577a99af39b329bdf65ef21672ab9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12b18be1f0c93aa4eb12bf08b10fc50d51577a99af39b329bdf65ef21672ab9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12b18be1f0c93aa4eb12bf08b10fc50d51577a99af39b329bdf65ef21672ab9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12b18be1f0c93aa4eb12bf08b10fc50d51577a99af39b329bdf65ef21672ab9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:40 compute-0 sudo[229666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:40 compute-0 podman[229604]: 2025-11-25 23:48:40.643966731 +0000 UTC m=+0.146112873 container init 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:48:40 compute-0 podman[229604]: 2025-11-25 23:48:40.659905363 +0000 UTC m=+0.162051535 container start 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 23:48:40 compute-0 podman[229604]: 2025-11-25 23:48:40.663890809 +0000 UTC m=+0.166036991 container attach 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 23:48:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:48:40.755 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:48:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:48:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:48:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:48:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:48:40 compute-0 python3.9[229669]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:40 compute-0 sudo[229666]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:41 compute-0 sudo[229825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgchyuioppswkvpvskcuxncgngtcwctp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114521.0452137-407-266942306151770/AnsiballZ_stat.py'
Nov 25 23:48:41 compute-0 sudo[229825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:41 compute-0 trusting_germain[229661]: {
Nov 25 23:48:41 compute-0 trusting_germain[229661]:     "0": [
Nov 25 23:48:41 compute-0 trusting_germain[229661]:         {
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "devices": [
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "/dev/loop3"
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             ],
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_name": "ceph_lv0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_size": "21470642176",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "name": "ceph_lv0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "tags": {
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.cluster_name": "ceph",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.crush_device_class": "",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.encrypted": "0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.osd_id": "0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.type": "block",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.vdo": "0"
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             },
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "type": "block",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "vg_name": "ceph_vg0"
Nov 25 23:48:41 compute-0 trusting_germain[229661]:         }
Nov 25 23:48:41 compute-0 trusting_germain[229661]:     ],
Nov 25 23:48:41 compute-0 trusting_germain[229661]:     "1": [
Nov 25 23:48:41 compute-0 trusting_germain[229661]:         {
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "devices": [
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "/dev/loop4"
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             ],
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_name": "ceph_lv1",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_size": "21470642176",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "name": "ceph_lv1",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "tags": {
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.cluster_name": "ceph",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.crush_device_class": "",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.encrypted": "0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.osd_id": "1",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.type": "block",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.vdo": "0"
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             },
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "type": "block",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "vg_name": "ceph_vg1"
Nov 25 23:48:41 compute-0 trusting_germain[229661]:         }
Nov 25 23:48:41 compute-0 trusting_germain[229661]:     ],
Nov 25 23:48:41 compute-0 trusting_germain[229661]:     "2": [
Nov 25 23:48:41 compute-0 trusting_germain[229661]:         {
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "devices": [
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "/dev/loop5"
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             ],
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_name": "ceph_lv2",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_size": "21470642176",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "name": "ceph_lv2",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "tags": {
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.cluster_name": "ceph",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.crush_device_class": "",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.encrypted": "0",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.osd_id": "2",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.type": "block",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:                 "ceph.vdo": "0"
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             },
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "type": "block",
Nov 25 23:48:41 compute-0 trusting_germain[229661]:             "vg_name": "ceph_vg2"
Nov 25 23:48:41 compute-0 trusting_germain[229661]:         }
Nov 25 23:48:41 compute-0 trusting_germain[229661]:     ]
Nov 25 23:48:41 compute-0 trusting_germain[229661]: }
Nov 25 23:48:41 compute-0 systemd[1]: libpod-083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6.scope: Deactivated successfully.
Nov 25 23:48:41 compute-0 podman[229604]: 2025-11-25 23:48:41.443072623 +0000 UTC m=+0.945218765 container died 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 25 23:48:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b12b18be1f0c93aa4eb12bf08b10fc50d51577a99af39b329bdf65ef21672ab9-merged.mount: Deactivated successfully.
Nov 25 23:48:41 compute-0 ceph-mon[75654]: pgmap v556: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:41 compute-0 podman[229604]: 2025-11-25 23:48:41.504567832 +0000 UTC m=+1.006713984 container remove 083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 23:48:41 compute-0 systemd[1]: libpod-conmon-083afdcb919404ea1cf8148c4b490b8ad4c1a13258a3f7ba1baf7cb73b3730c6.scope: Deactivated successfully.
Nov 25 23:48:41 compute-0 sudo[229364]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:41 compute-0 python3.9[229827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:41 compute-0 sudo[229825]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:41 compute-0 sudo[229840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:48:41 compute-0 sudo[229840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:41 compute-0 sudo[229840]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:41 compute-0 sudo[229871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:48:41 compute-0 sudo[229871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:41 compute-0 sudo[229871]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:41 compute-0 sudo[229915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:48:41 compute-0 sudo[229915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:41 compute-0 sudo[229915]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:41 compute-0 sudo[230009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyxdclhewvtccknqtdkropgbblzhfgey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114521.0452137-407-266942306151770/AnsiballZ_file.py'
Nov 25 23:48:41 compute-0 sudo[230009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:41 compute-0 sudo[229968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:48:41 compute-0 sudo[229968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:42 compute-0 python3.9[230015]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:42 compute-0 sudo[230009]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:42 compute-0 podman[230082]: 2025-11-25 23:48:42.224482847 +0000 UTC m=+0.039802466 container create 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 23:48:42 compute-0 systemd[1]: Started libpod-conmon-14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be.scope.
Nov 25 23:48:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:48:42 compute-0 podman[230082]: 2025-11-25 23:48:42.206213312 +0000 UTC m=+0.021532911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:48:42 compute-0 podman[230082]: 2025-11-25 23:48:42.310721652 +0000 UTC m=+0.126041321 container init 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:48:42 compute-0 podman[230082]: 2025-11-25 23:48:42.322034072 +0000 UTC m=+0.137353691 container start 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 23:48:42 compute-0 podman[230082]: 2025-11-25 23:48:42.327108186 +0000 UTC m=+0.142427855 container attach 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:48:42 compute-0 nifty_thompson[230098]: 167 167
Nov 25 23:48:42 compute-0 systemd[1]: libpod-14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be.scope: Deactivated successfully.
Nov 25 23:48:42 compute-0 podman[230082]: 2025-11-25 23:48:42.32988036 +0000 UTC m=+0.145199949 container died 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:48:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-20720e136399b588dd29189fca98e21e10f1bfaeb592dcc8b1216546f49adb48-merged.mount: Deactivated successfully.
Nov 25 23:48:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:42 compute-0 podman[230082]: 2025-11-25 23:48:42.376983167 +0000 UTC m=+0.192302756 container remove 14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 25 23:48:42 compute-0 systemd[1]: libpod-conmon-14a96e9eeb8296019c343aa11a27503076792eec16ef3b6bc93926f1640524be.scope: Deactivated successfully.
Nov 25 23:48:42 compute-0 podman[230196]: 2025-11-25 23:48:42.606068517 +0000 UTC m=+0.059460336 container create 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 23:48:42 compute-0 systemd[1]: Started libpod-conmon-24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716.scope.
Nov 25 23:48:42 compute-0 podman[230196]: 2025-11-25 23:48:42.582975676 +0000 UTC m=+0.036367545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:48:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86c188d3969be041c82abc3f3be8ec5bf0e3760d31ef415585dbff19844995a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86c188d3969be041c82abc3f3be8ec5bf0e3760d31ef415585dbff19844995a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86c188d3969be041c82abc3f3be8ec5bf0e3760d31ef415585dbff19844995a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c86c188d3969be041c82abc3f3be8ec5bf0e3760d31ef415585dbff19844995a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:48:42 compute-0 podman[230196]: 2025-11-25 23:48:42.707817183 +0000 UTC m=+0.161209052 container init 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:48:42 compute-0 podman[230196]: 2025-11-25 23:48:42.720586222 +0000 UTC m=+0.173978061 container start 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 23:48:42 compute-0 podman[230196]: 2025-11-25 23:48:42.725087691 +0000 UTC m=+0.178479530 container attach 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:48:42 compute-0 sudo[230265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btwspfygyxaxifyglcydcaxcrwgyxaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114522.3214815-419-270352959927274/AnsiballZ_systemd.py'
Nov 25 23:48:42 compute-0 sudo[230265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:43 compute-0 python3.9[230269]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:48:43 compute-0 systemd[1]: Reloading.
Nov 25 23:48:43 compute-0 systemd-rc-local-generator[230295]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:48:43 compute-0 systemd-sysv-generator[230301]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:48:43 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 23:48:43 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 23:48:43 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 23:48:43 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 23:48:43 compute-0 sudo[230265]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:43 compute-0 ceph-mon[75654]: pgmap v557: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:43 compute-0 competent_shirley[230236]: {
Nov 25 23:48:43 compute-0 competent_shirley[230236]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "osd_id": 2,
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "type": "bluestore"
Nov 25 23:48:43 compute-0 competent_shirley[230236]:     },
Nov 25 23:48:43 compute-0 competent_shirley[230236]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "osd_id": 1,
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "type": "bluestore"
Nov 25 23:48:43 compute-0 competent_shirley[230236]:     },
Nov 25 23:48:43 compute-0 competent_shirley[230236]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "osd_id": 0,
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:48:43 compute-0 competent_shirley[230236]:         "type": "bluestore"
Nov 25 23:48:43 compute-0 competent_shirley[230236]:     }
Nov 25 23:48:43 compute-0 competent_shirley[230236]: }
Nov 25 23:48:43 compute-0 systemd[1]: libpod-24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716.scope: Deactivated successfully.
Nov 25 23:48:43 compute-0 systemd[1]: libpod-24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716.scope: Consumed 1.059s CPU time.
Nov 25 23:48:43 compute-0 sshd-session[230270]: Invalid user work from 124.18.141.70 port 52832
Nov 25 23:48:43 compute-0 podman[230196]: 2025-11-25 23:48:43.779780325 +0000 UTC m=+1.233172164 container died 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 23:48:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c86c188d3969be041c82abc3f3be8ec5bf0e3760d31ef415585dbff19844995a-merged.mount: Deactivated successfully.
Nov 25 23:48:43 compute-0 podman[230196]: 2025-11-25 23:48:43.854651469 +0000 UTC m=+1.308043308 container remove 24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:48:43 compute-0 systemd[1]: libpod-conmon-24f1fd6208bb06f6cc2df2c08fd3e8eeed524da7b4372627dbbc0f1b6067a716.scope: Deactivated successfully.
Nov 25 23:48:43 compute-0 sudo[229968]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:43 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:48:43 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:48:43 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:48:43 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:48:43 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev a7d55164-c930-48df-8b5d-5193b4563b8c does not exist
Nov 25 23:48:43 compute-0 sshd-session[230270]: Received disconnect from 124.18.141.70 port 52832:11: Bye Bye [preauth]
Nov 25 23:48:43 compute-0 sshd-session[230270]: Disconnected from invalid user work 124.18.141.70 port 52832 [preauth]
Nov 25 23:48:44 compute-0 sudo[230412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:48:44 compute-0 sudo[230412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:44 compute-0 sudo[230412]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:44 compute-0 sudo[230474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:48:44 compute-0 sudo[230474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:48:44 compute-0 sudo[230474]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:44 compute-0 sudo[230551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhxbevzllxycfguzylazyjobrtarfxnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114523.8846738-429-14475299255012/AnsiballZ_file.py'
Nov 25 23:48:44 compute-0 sudo[230551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v558: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:44 compute-0 python3.9[230553]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:48:44 compute-0 sudo[230551]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:44 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:48:44 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:48:44 compute-0 sudo[230713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uysddiiofsbvzmzppzyqvsqudmydqbsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114524.6593208-437-214497841918838/AnsiballZ_stat.py'
Nov 25 23:48:44 compute-0 sudo[230713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:44 compute-0 podman[230677]: 2025-11-25 23:48:44.970098722 +0000 UTC m=+0.064602012 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 23:48:45 compute-0 python3.9[230720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:45 compute-0 sudo[230713]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:45 compute-0 sudo[230844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oipyeclpgxkegvwzapczvwvcdlqeyoek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114524.6593208-437-214497841918838/AnsiballZ_copy.py'
Nov 25 23:48:45 compute-0 sudo[230844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:45 compute-0 python3.9[230846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114524.6593208-437-214497841918838/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:48:45 compute-0 sudo[230844]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:45 compute-0 ceph-mon[75654]: pgmap v558: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:46 compute-0 sudo[230996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzhpzpntwlhirqgejzzsvaetfaxoyxnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114526.1153564-454-96659733170959/AnsiballZ_file.py'
Nov 25 23:48:46 compute-0 sudo[230996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:46 compute-0 python3.9[230998]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:48:46 compute-0 sudo[230996]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:47 compute-0 sudo[231148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iahigsmgufkmingxrkfvajxcvnmzoqft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114526.8832836-462-61917942496976/AnsiballZ_stat.py'
Nov 25 23:48:47 compute-0 sudo[231148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:47 compute-0 python3.9[231150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:48:47 compute-0 sudo[231148]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:47 compute-0 sudo[231271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btcznblloucytpludnomefdcabgjxtoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114526.8832836-462-61917942496976/AnsiballZ_copy.py'
Nov 25 23:48:47 compute-0 sudo[231271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:47 compute-0 ceph-mon[75654]: pgmap v559: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:48 compute-0 python3.9[231273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114526.8832836-462-61917942496976/.source.json _original_basename=.bu5w2_ts follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:48 compute-0 sudo[231271]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:48 compute-0 sudo[231423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvwfmcphkbnqgafgpkgdvqknpbqxcmvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114528.2816408-477-236432299627083/AnsiballZ_file.py'
Nov 25 23:48:48 compute-0 sudo[231423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:48 compute-0 python3.9[231425]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:48 compute-0 sudo[231423]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:49 compute-0 sudo[231575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxvmujkvckcfbxqfexdgvvkmokhtplye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114529.186818-485-58208078616845/AnsiballZ_stat.py'
Nov 25 23:48:49 compute-0 sudo[231575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:49 compute-0 sudo[231575]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:49 compute-0 ceph-mon[75654]: pgmap v560: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:50 compute-0 sudo[231698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zebxlwkdqilfbpzmceamqnqrniokgcpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114529.186818-485-58208078616845/AnsiballZ_copy.py'
Nov 25 23:48:50 compute-0 sudo[231698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:50 compute-0 sudo[231698]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:50 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 25 23:48:51 compute-0 sudo[231851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwfzcnzpibfdqebwkhivecjkkodoeipn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114530.8776798-502-73494142041450/AnsiballZ_container_config_data.py'
Nov 25 23:48:51 compute-0 sudo[231851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:51 compute-0 python3.9[231853]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 25 23:48:51 compute-0 sudo[231851]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:51 compute-0 ceph-mon[75654]: pgmap v561: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:52 compute-0 sudo[232003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eydyuxixjwmowryxvxxtpusnelrahkkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114531.9081771-511-129806705663109/AnsiballZ_container_config_hash.py'
Nov 25 23:48:52 compute-0 sudo[232003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:52 compute-0 python3.9[232005]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 23:48:52 compute-0 sudo[232003]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:53 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 23:48:53 compute-0 sudo[232156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uksqeuijlelgzionxurujzaprsvekpag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114533.0212753-520-8810369292673/AnsiballZ_podman_container_info.py'
Nov 25 23:48:53 compute-0 sudo[232156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:53 compute-0 python3.9[232158]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 23:48:53 compute-0 ceph-mon[75654]: pgmap v562: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:54 compute-0 sudo[232156]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:55 compute-0 sudo[232335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-komiytawrxalnvemcjlonhcezcdeprgz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764114534.8674018-533-245722212383592/AnsiballZ_edpm_container_manage.py'
Nov 25 23:48:55 compute-0 sudo[232335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:48:55 compute-0 python3[232337]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 23:48:55 compute-0 ceph-mon[75654]: pgmap v563: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:48:56
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', 'images']
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:48:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:57 compute-0 podman[232350]: 2025-11-25 23:48:57.165395247 +0000 UTC m=+1.324060681 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 23:48:57 compute-0 podman[232405]: 2025-11-25 23:48:57.307029181 +0000 UTC m=+0.046559196 container create b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 23:48:57 compute-0 podman[232405]: 2025-11-25 23:48:57.284544015 +0000 UTC m=+0.024074000 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 23:48:57 compute-0 python3[232337]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 23:48:57 compute-0 sudo[232335]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:57 compute-0 ceph-mon[75654]: pgmap v564: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:57 compute-0 sudo[232593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcnvemsengiehbxawdsacwsqdsvpqanq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114537.646781-541-73424956128745/AnsiballZ_stat.py'
Nov 25 23:48:57 compute-0 sudo[232593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:58 compute-0 python3.9[232595]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:48:58 compute-0 sudo[232593]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:48:58 compute-0 sudo[232747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbbvmzrobreeirzfdvoymqbfceggplpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114538.5397573-550-23805210738081/AnsiballZ_file.py'
Nov 25 23:48:58 compute-0 sudo[232747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:59 compute-0 python3.9[232749]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:48:59 compute-0 sudo[232747]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:59 compute-0 sudo[232823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lizzyuzenudndflsidisrnwdudcaxcho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114538.5397573-550-23805210738081/AnsiballZ_stat.py'
Nov 25 23:48:59 compute-0 sudo[232823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:48:59 compute-0 python3.9[232825]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:48:59 compute-0 sudo[232823]: pam_unix(sudo:session): session closed for user root
Nov 25 23:48:59 compute-0 ceph-mon[75654]: pgmap v565: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:00 compute-0 sudo[232974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siidgnxevnzdpidgffutalyeubgapcxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114539.8635983-550-268183740930661/AnsiballZ_copy.py'
Nov 25 23:49:00 compute-0 sudo[232974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:00 compute-0 python3.9[232976]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764114539.8635983-550-268183740930661/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:00 compute-0 sudo[232974]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:01 compute-0 sudo[233050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlgutmzvzmsyfydthmaqrjnerfjbjmod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114539.8635983-550-268183740930661/AnsiballZ_systemd.py'
Nov 25 23:49:01 compute-0 sudo[233050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:01 compute-0 python3.9[233052]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 23:49:01 compute-0 systemd[1]: Reloading.
Nov 25 23:49:01 compute-0 systemd-rc-local-generator[233082]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:49:01 compute-0 systemd-sysv-generator[233085]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:49:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:49:01 compute-0 sudo[233050]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:01 compute-0 ceph-mon[75654]: pgmap v566: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:02 compute-0 sudo[233161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivocgyuwvqnvnimmjrljakqhltbtwwua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114539.8635983-550-268183740930661/AnsiballZ_systemd.py'
Nov 25 23:49:02 compute-0 sudo[233161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:02 compute-0 python3.9[233163]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:49:02 compute-0 systemd[1]: Reloading.
Nov 25 23:49:02 compute-0 systemd-rc-local-generator[233192]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:49:02 compute-0 systemd-sysv-generator[233195]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:49:02 compute-0 systemd[1]: Starting multipathd container...
Nov 25 23:49:03 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:49:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc456fe93271dcda37c18dbbb2a83b50dbdcad4cd5a84a4efe9cd6b2652013a9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc456fe93271dcda37c18dbbb2a83b50dbdcad4cd5a84a4efe9cd6b2652013a9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:03 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084.
Nov 25 23:49:03 compute-0 podman[233202]: 2025-11-25 23:49:03.073360999 +0000 UTC m=+0.138948492 container init b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 23:49:03 compute-0 multipathd[233218]: + sudo -E kolla_set_configs
Nov 25 23:49:03 compute-0 podman[233202]: 2025-11-25 23:49:03.110321038 +0000 UTC m=+0.175908531 container start b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 23:49:03 compute-0 podman[233202]: multipathd
Nov 25 23:49:03 compute-0 systemd[1]: Started multipathd container.
Nov 25 23:49:03 compute-0 sudo[233225]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 23:49:03 compute-0 sudo[233225]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 23:49:03 compute-0 sudo[233225]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 23:49:03 compute-0 sudo[233161]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:03 compute-0 multipathd[233218]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 23:49:03 compute-0 multipathd[233218]: INFO:__main__:Validating config file
Nov 25 23:49:03 compute-0 multipathd[233218]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 23:49:03 compute-0 multipathd[233218]: INFO:__main__:Writing out command to execute
Nov 25 23:49:03 compute-0 podman[233224]: 2025-11-25 23:49:03.217399806 +0000 UTC m=+0.090507739 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 23:49:03 compute-0 sudo[233225]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:03 compute-0 multipathd[233218]: ++ cat /run_command
Nov 25 23:49:03 compute-0 systemd[1]: b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-31ede1e73f9bb0c4.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 23:49:03 compute-0 systemd[1]: b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-31ede1e73f9bb0c4.service: Failed with result 'exit-code'.
Nov 25 23:49:03 compute-0 multipathd[233218]: + CMD='/usr/sbin/multipathd -d'
Nov 25 23:49:03 compute-0 multipathd[233218]: + ARGS=
Nov 25 23:49:03 compute-0 multipathd[233218]: + sudo kolla_copy_cacerts
Nov 25 23:49:03 compute-0 sudo[233250]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 23:49:03 compute-0 sudo[233250]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 23:49:03 compute-0 sudo[233250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 23:49:03 compute-0 sudo[233250]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:03 compute-0 multipathd[233218]: + [[ ! -n '' ]]
Nov 25 23:49:03 compute-0 multipathd[233218]: + . kolla_extend_start
Nov 25 23:49:03 compute-0 multipathd[233218]: Running command: '/usr/sbin/multipathd -d'
Nov 25 23:49:03 compute-0 multipathd[233218]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 23:49:03 compute-0 multipathd[233218]: + umask 0022
Nov 25 23:49:03 compute-0 multipathd[233218]: + exec /usr/sbin/multipathd -d
Nov 25 23:49:03 compute-0 multipathd[233218]: 3356.007641 | --------start up--------
Nov 25 23:49:03 compute-0 multipathd[233218]: 3356.007674 | read /etc/multipath.conf
Nov 25 23:49:03 compute-0 multipathd[233218]: 3356.018367 | path checkers start up
Nov 25 23:49:04 compute-0 ceph-mon[75654]: pgmap v567: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:04 compute-0 python3.9[233404]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:49:04 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 23:49:04 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 25 23:49:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:04 compute-0 sudo[233558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iylphvixzlouztxkeqattoozghqnrkom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114544.404355-586-61653274882044/AnsiballZ_command.py'
Nov 25 23:49:04 compute-0 sudo[233558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:04 compute-0 python3.9[233560]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:49:05 compute-0 sudo[233558]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.471023) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545471057, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1042, "num_deletes": 250, "total_data_size": 1047556, "memory_usage": 1068456, "flush_reason": "Manual Compaction"}
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545480043, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 643564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11105, "largest_seqno": 12146, "table_properties": {"data_size": 639580, "index_size": 1636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10103, "raw_average_key_size": 20, "raw_value_size": 631040, "raw_average_value_size": 1249, "num_data_blocks": 75, "num_entries": 505, "num_filter_entries": 505, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114442, "oldest_key_time": 1764114442, "file_creation_time": 1764114545, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 9067 microseconds, and 3279 cpu microseconds.
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.480087) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 643564 bytes OK
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.480105) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.481922) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.481934) EVENT_LOG_v1 {"time_micros": 1764114545481930, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.481948) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1042676, prev total WAL file size 1042676, number of live WAL files 2.
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.482376) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(628KB)], [29(5760KB)]
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545482435, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 6542131, "oldest_snapshot_seqno": -1}
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3247 keys, 4842013 bytes, temperature: kUnknown
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545506051, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 4842013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4819084, "index_size": 13732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 75031, "raw_average_key_size": 23, "raw_value_size": 4759699, "raw_average_value_size": 1465, "num_data_blocks": 610, "num_entries": 3247, "num_filter_entries": 3247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114545, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.506321) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 4842013 bytes
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.507539) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 276.1 rd, 204.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 5.6 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(17.7) write-amplify(7.5) OK, records in: 3713, records dropped: 466 output_compression: NoCompression
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.507567) EVENT_LOG_v1 {"time_micros": 1764114545507555, "job": 12, "event": "compaction_finished", "compaction_time_micros": 23697, "compaction_time_cpu_micros": 12256, "output_level": 6, "num_output_files": 1, "total_output_size": 4842013, "num_input_records": 3713, "num_output_records": 3247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545507869, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114545509714, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.482279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.509755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.509757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.509759) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.509760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:49:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:49:05.509762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:49:05 compute-0 sudo[233723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzhluxliksmlpzcgyesykxwxsssnrqnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114545.3429093-594-128727637442830/AnsiballZ_systemd.py'
Nov 25 23:49:05 compute-0 sudo[233723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:05 compute-0 python3.9[233725]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:49:06 compute-0 ceph-mon[75654]: pgmap v568: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:06 compute-0 systemd[1]: Stopping multipathd container...
Nov 25 23:49:06 compute-0 multipathd[233218]: 3358.858389 | exit (signal)
Nov 25 23:49:06 compute-0 multipathd[233218]: 3358.859090 | --------shut down-------
Nov 25 23:49:06 compute-0 systemd[1]: libpod-b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084.scope: Deactivated successfully.
Nov 25 23:49:06 compute-0 podman[233729]: 2025-11-25 23:49:06.170808496 +0000 UTC m=+0.119071446 container died b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 23:49:06 compute-0 systemd[1]: b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-31ede1e73f9bb0c4.timer: Deactivated successfully.
Nov 25 23:49:06 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084.
Nov 25 23:49:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-userdata-shm.mount: Deactivated successfully.
Nov 25 23:49:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc456fe93271dcda37c18dbbb2a83b50dbdcad4cd5a84a4efe9cd6b2652013a9-merged.mount: Deactivated successfully.
Nov 25 23:49:06 compute-0 podman[233729]: 2025-11-25 23:49:06.233548978 +0000 UTC m=+0.181811938 container cleanup b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 23:49:06 compute-0 podman[233729]: multipathd
Nov 25 23:49:06 compute-0 podman[233758]: multipathd
Nov 25 23:49:06 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 25 23:49:06 compute-0 systemd[1]: Stopped multipathd container.
Nov 25 23:49:06 compute-0 systemd[1]: Starting multipathd container...
Nov 25 23:49:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:49:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc456fe93271dcda37c18dbbb2a83b50dbdcad4cd5a84a4efe9cd6b2652013a9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc456fe93271dcda37c18dbbb2a83b50dbdcad4cd5a84a4efe9cd6b2652013a9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084.
Nov 25 23:49:06 compute-0 podman[233771]: 2025-11-25 23:49:06.437201535 +0000 UTC m=+0.110275853 container init b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 23:49:06 compute-0 multipathd[233786]: + sudo -E kolla_set_configs
Nov 25 23:49:06 compute-0 sudo[233792]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 23:49:06 compute-0 podman[233771]: 2025-11-25 23:49:06.470970469 +0000 UTC m=+0.144044757 container start b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 23:49:06 compute-0 sudo[233792]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 23:49:06 compute-0 sudo[233792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 23:49:06 compute-0 podman[233771]: multipathd
Nov 25 23:49:06 compute-0 systemd[1]: Started multipathd container.
Nov 25 23:49:06 compute-0 sudo[233723]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:06 compute-0 multipathd[233786]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 23:49:06 compute-0 multipathd[233786]: INFO:__main__:Validating config file
Nov 25 23:49:06 compute-0 multipathd[233786]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 23:49:06 compute-0 multipathd[233786]: INFO:__main__:Writing out command to execute
Nov 25 23:49:06 compute-0 sudo[233792]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:06 compute-0 multipathd[233786]: ++ cat /run_command
Nov 25 23:49:06 compute-0 multipathd[233786]: + CMD='/usr/sbin/multipathd -d'
Nov 25 23:49:06 compute-0 multipathd[233786]: + ARGS=
Nov 25 23:49:06 compute-0 multipathd[233786]: + sudo kolla_copy_cacerts
Nov 25 23:49:06 compute-0 podman[233793]: 2025-11-25 23:49:06.563040858 +0000 UTC m=+0.085205048 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:49:06 compute-0 sudo[233815]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 23:49:06 compute-0 sudo[233815]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 23:49:06 compute-0 sudo[233815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 23:49:06 compute-0 systemd[1]: b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-41b6612c68b5e00b.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 23:49:06 compute-0 systemd[1]: b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084-41b6612c68b5e00b.service: Failed with result 'exit-code'.
Nov 25 23:49:06 compute-0 sudo[233815]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:06 compute-0 multipathd[233786]: Running command: '/usr/sbin/multipathd -d'
Nov 25 23:49:06 compute-0 multipathd[233786]: + [[ ! -n '' ]]
Nov 25 23:49:06 compute-0 multipathd[233786]: + . kolla_extend_start
Nov 25 23:49:06 compute-0 multipathd[233786]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 23:49:06 compute-0 multipathd[233786]: + umask 0022
Nov 25 23:49:06 compute-0 multipathd[233786]: + exec /usr/sbin/multipathd -d
Nov 25 23:49:06 compute-0 multipathd[233786]: 3359.309771 | --------start up--------
Nov 25 23:49:06 compute-0 multipathd[233786]: 3359.309788 | read /etc/multipath.conf
Nov 25 23:49:06 compute-0 multipathd[233786]: 3359.316411 | path checkers start up
Nov 25 23:49:07 compute-0 sudo[233974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toglvqvyewzmwhbzenwigrpkznzmmlpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114546.733244-602-84589147390550/AnsiballZ_file.py'
Nov 25 23:49:07 compute-0 sudo[233974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:07 compute-0 python3.9[233976]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:07 compute-0 sudo[233974]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:08 compute-0 ceph-mon[75654]: pgmap v569: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:08 compute-0 sudo[234134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igwmrmobjkxycpeyiqucwycmzkbiptpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114547.6873088-614-231700615647712/AnsiballZ_file.py'
Nov 25 23:49:08 compute-0 sudo[234134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:08 compute-0 podman[234100]: 2025-11-25 23:49:08.109782419 +0000 UTC m=+0.099215410 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 23:49:08 compute-0 python3.9[234141]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 23:49:08 compute-0 sudo[234134]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v570: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:08 compute-0 sudo[234304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zigxiwhnbqfwsmmdnntzfmkchyhgbndk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114548.4936373-622-194722373340560/AnsiballZ_modprobe.py'
Nov 25 23:49:08 compute-0 sudo[234304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:09 compute-0 python3.9[234306]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 25 23:49:09 compute-0 kernel: Key type psk registered
Nov 25 23:49:09 compute-0 sudo[234304]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:09 compute-0 sudo[234467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssxwelwjxcjjpyeqqtozouxsuhekkvxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114549.5238996-630-41314381482995/AnsiballZ_stat.py'
Nov 25 23:49:09 compute-0 sudo[234467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:10 compute-0 ceph-mon[75654]: pgmap v570: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:10 compute-0 python3.9[234469]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:49:10 compute-0 sudo[234467]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:10 compute-0 sudo[234590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbnziocqejzbumywzxllaeciwpjoicds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114549.5238996-630-41314381482995/AnsiballZ_copy.py'
Nov 25 23:49:10 compute-0 sudo[234590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:10 compute-0 python3.9[234592]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764114549.5238996-630-41314381482995/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:10 compute-0 sudo[234590]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:11 compute-0 sudo[234742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xikpcntlyxquuphrzjscwtvxpbutxltf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114551.0461478-646-88223597353316/AnsiballZ_lineinfile.py'
Nov 25 23:49:11 compute-0 sudo[234742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:11 compute-0 python3.9[234744]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:11 compute-0 sudo[234742]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:12 compute-0 ceph-mon[75654]: pgmap v571: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:12 compute-0 sudo[234894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eumrzpafgtksayxpzvllhyindvjuphbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114551.885049-654-207023849256016/AnsiballZ_systemd.py'
Nov 25 23:49:12 compute-0 sudo[234894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:12 compute-0 python3.9[234896]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:49:12 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 23:49:12 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 25 23:49:12 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 25 23:49:12 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 23:49:12 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 23:49:12 compute-0 sudo[234894]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:13 compute-0 sudo[235050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yefrisqlmxptgymgyrqjxtgchlptmkwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114552.9708796-662-89930295861500/AnsiballZ_dnf.py'
Nov 25 23:49:13 compute-0 sudo[235050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:13 compute-0 python3.9[235052]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 23:49:14 compute-0 ceph-mon[75654]: pgmap v572: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:15 compute-0 podman[235055]: 2025-11-25 23:49:15.263049485 +0000 UTC m=+0.079882267 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 23:49:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:15 compute-0 systemd[1]: Reloading.
Nov 25 23:49:16 compute-0 systemd-rc-local-generator[235095]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:49:16 compute-0 systemd-sysv-generator[235102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:49:16 compute-0 ceph-mon[75654]: pgmap v573: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:16 compute-0 systemd[1]: Reloading.
Nov 25 23:49:16 compute-0 systemd-rc-local-generator[235142]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:49:16 compute-0 systemd-sysv-generator[235147]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:49:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:16 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 23:49:16 compute-0 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 23:49:16 compute-0 lvm[235186]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 23:49:16 compute-0 lvm[235186]: VG ceph_vg1 finished
Nov 25 23:49:16 compute-0 lvm[235188]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 23:49:16 compute-0 lvm[235188]: VG ceph_vg0 finished
Nov 25 23:49:16 compute-0 lvm[235187]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 23:49:16 compute-0 lvm[235187]: VG ceph_vg2 finished
Nov 25 23:49:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 23:49:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 23:49:16 compute-0 systemd[1]: Reloading.
Nov 25 23:49:17 compute-0 systemd-rc-local-generator[235243]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:49:17 compute-0 systemd-sysv-generator[235247]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:49:17 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 23:49:17 compute-0 sudo[235050]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:18 compute-0 ceph-mon[75654]: pgmap v574: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:18 compute-0 sudo[236492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqbzatrekfiulrtnfhcstarazqtchenh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114557.9705062-670-12239027066281/AnsiballZ_systemd_service.py'
Nov 25 23:49:18 compute-0 sudo[236492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 23:49:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 23:49:18 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.817s CPU time.
Nov 25 23:49:18 compute-0 systemd[1]: run-r6eb606b3add0414f97fec54969519a24.service: Deactivated successfully.
Nov 25 23:49:18 compute-0 python3.9[236512]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:49:18 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 25 23:49:18 compute-0 iscsid[223940]: iscsid shutting down.
Nov 25 23:49:18 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 25 23:49:18 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 25 23:49:18 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 23:49:18 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 25 23:49:18 compute-0 systemd[1]: Started Open-iSCSI.
Nov 25 23:49:18 compute-0 sudo[236492]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:19 compute-0 python3.9[236686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 23:49:20 compute-0 ceph-mon[75654]: pgmap v575: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:20 compute-0 sudo[236840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbbrsqenzyqfnykgysovtvsocelwhemr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114560.1512203-688-93884999396843/AnsiballZ_file.py'
Nov 25 23:49:20 compute-0 sudo[236840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:20 compute-0 python3.9[236842]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:20 compute-0 sudo[236840]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:21 compute-0 sudo[236992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rafdexubrpdtlwwyijylygnabdloebdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114561.2131622-699-246763052040542/AnsiballZ_systemd_service.py'
Nov 25 23:49:21 compute-0 sudo[236992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:21 compute-0 python3.9[236994]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 23:49:21 compute-0 systemd[1]: Reloading.
Nov 25 23:49:22 compute-0 systemd-rc-local-generator[237019]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:49:22 compute-0 ceph-mon[75654]: pgmap v576: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:22 compute-0 systemd-sysv-generator[237023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:49:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:22 compute-0 sudo[236992]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:23 compute-0 python3.9[237178]: ansible-ansible.builtin.service_facts Invoked
Nov 25 23:49:23 compute-0 ceph-mon[75654]: pgmap v577: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:23 compute-0 network[237195]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 23:49:23 compute-0 network[237196]: 'network-scripts' will be removed from distribution in near future.
Nov 25 23:49:23 compute-0 network[237197]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 23:49:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:25 compute-0 ceph-mon[75654]: pgmap v578: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:49:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:49:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:49:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:49:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:49:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:49:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:27 compute-0 ceph-mon[75654]: pgmap v579: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:27 compute-0 sudo[237470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zidkqawpdefehiijoknrpvvjijrsxlfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114567.369062-718-72143961683630/AnsiballZ_systemd_service.py'
Nov 25 23:49:27 compute-0 sudo[237470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:28 compute-0 python3.9[237472]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:49:28 compute-0 sudo[237470]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:29 compute-0 sudo[237623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojjjjrdtcfycmnuggoauybmwflpiajix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114568.2747831-718-32841141746232/AnsiballZ_systemd_service.py'
Nov 25 23:49:29 compute-0 sudo[237623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:29 compute-0 python3.9[237625]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:49:29 compute-0 ceph-mon[75654]: pgmap v580: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:29 compute-0 sudo[237623]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:30 compute-0 sudo[237776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbxpouileighdldtwbgtcllzfeiznkul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114569.6717553-718-30016698922677/AnsiballZ_systemd_service.py'
Nov 25 23:49:30 compute-0 sudo[237776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:30 compute-0 python3.9[237778]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:49:30 compute-0 sudo[237776]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:31 compute-0 sudo[237929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmtsxvzbvcicveeuobocfmxrjzhnknji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114570.661424-718-146026398947041/AnsiballZ_systemd_service.py'
Nov 25 23:49:31 compute-0 sudo[237929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:31 compute-0 python3.9[237931]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:49:31 compute-0 ceph-mon[75654]: pgmap v581: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:32 compute-0 sudo[237929]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:32 compute-0 sudo[238082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byvsmccuhmyyzorwzmofywpdpyyzpjwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114572.5551803-718-88796344833658/AnsiballZ_systemd_service.py'
Nov 25 23:49:32 compute-0 sudo[238082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:33 compute-0 python3.9[238084]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:49:33 compute-0 sudo[238082]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:33 compute-0 ceph-mon[75654]: pgmap v582: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:33 compute-0 sudo[238235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvyrmlszbsffdldyygxsjuxrgoapiwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114573.5172946-718-254213734544029/AnsiballZ_systemd_service.py'
Nov 25 23:49:33 compute-0 sudo[238235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:34 compute-0 python3.9[238237]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:49:34 compute-0 sudo[238235]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:34 compute-0 sudo[238388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmbmvywoxaeajxumfxhyvlchmkxyjjww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114574.4706976-718-96334758520389/AnsiballZ_systemd_service.py'
Nov 25 23:49:34 compute-0 sudo[238388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:35 compute-0 python3.9[238390]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:49:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:35 compute-0 ceph-mon[75654]: pgmap v583: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:36 compute-0 sudo[238388]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:36 compute-0 sudo[238555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bljiufgvfkmwpfsoluihjidvljwmgrqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114576.4390597-718-231049991719099/AnsiballZ_systemd_service.py'
Nov 25 23:49:36 compute-0 sudo[238555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:36 compute-0 podman[238515]: 2025-11-25 23:49:36.863029227 +0000 UTC m=+0.094015792 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 23:49:37 compute-0 python3.9[238560]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:49:37 compute-0 sudo[238555]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:37 compute-0 ceph-mon[75654]: pgmap v584: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:38 compute-0 sudo[238712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfkoztckwzpywilavpylasjxwtlxeblg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114577.6596205-777-186839377871455/AnsiballZ_file.py'
Nov 25 23:49:38 compute-0 sudo[238712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:38 compute-0 python3.9[238714]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:38 compute-0 podman[238715]: 2025-11-25 23:49:38.298196071 +0000 UTC m=+0.122126156 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 23:49:38 compute-0 sudo[238712]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:38 compute-0 sudo[238890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbtkbjiwyhrypzsgmqochrzkbvdskhso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114578.5099633-777-24438899969468/AnsiballZ_file.py'
Nov 25 23:49:38 compute-0 sudo[238890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:39 compute-0 python3.9[238892]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:39 compute-0 sudo[238890]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:39 compute-0 ceph-mon[75654]: pgmap v585: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:39 compute-0 sudo[239042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awdnncfpsywcdamxsnjcsqjdcwxopvre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114579.2600777-777-258721703590608/AnsiballZ_file.py'
Nov 25 23:49:39 compute-0 sudo[239042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:39 compute-0 python3.9[239044]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:39 compute-0 sudo[239042]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:40 compute-0 sudo[239194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtgeprmnckryposozkgftywpcbcqglqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114580.1056874-777-162866650413192/AnsiballZ_file.py'
Nov 25 23:49:40 compute-0 sudo[239194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:40 compute-0 python3.9[239196]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:40 compute-0 sudo[239194]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:49:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:49:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:49:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:49:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:49:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:49:41 compute-0 sudo[239346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tizwvgcpzcpdwjfnsikapavcjbmtazeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114580.917924-777-246782438044973/AnsiballZ_file.py'
Nov 25 23:49:41 compute-0 sudo[239346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:41 compute-0 ceph-mon[75654]: pgmap v586: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:41 compute-0 python3.9[239348]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:41 compute-0 sudo[239346]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:41 compute-0 sudo[239498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckalbxrrwbcnfksuuflptsbttlkivnkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114581.678654-777-40228132652987/AnsiballZ_file.py'
Nov 25 23:49:41 compute-0 sudo[239498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:42 compute-0 python3.9[239500]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:42 compute-0 sudo[239498]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:42 compute-0 sudo[239650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igdslvsaplqcvdnyhrgkaskvcdybgpau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114582.358582-777-92907940833926/AnsiballZ_file.py'
Nov 25 23:49:42 compute-0 sudo[239650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:42 compute-0 python3.9[239652]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:42 compute-0 sudo[239650]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:43 compute-0 ceph-mon[75654]: pgmap v587: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:43 compute-0 sudo[239802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjfzucansvgrgfsmeihjxembmkymtdsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114583.2433279-777-2522485599715/AnsiballZ_file.py'
Nov 25 23:49:43 compute-0 sudo[239802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:43 compute-0 python3.9[239804]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:43 compute-0 sudo[239802]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:44 compute-0 sudo[239860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:49:44 compute-0 sudo[239860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:44 compute-0 sudo[239860]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:44 compute-0 sudo[239908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:49:44 compute-0 sudo[239908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:44 compute-0 sudo[239908]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:44 compute-0 sudo[239954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:49:44 compute-0 sudo[239954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:44 compute-0 sudo[239954]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:44 compute-0 sudo[240003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:49:44 compute-0 sudo[240003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:44 compute-0 sudo[240052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfsdhqdjmvbkkxnzxfocskozfvvqqaer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114584.0864406-834-185020963421388/AnsiballZ_file.py'
Nov 25 23:49:44 compute-0 sudo[240052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:44 compute-0 python3.9[240056]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:44 compute-0 sudo[240052]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:44 compute-0 sudo[240003]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:49:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:49:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:49:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:49:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:49:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:49:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev b405495c-904f-4c77-a287-2f6e033468ee does not exist
Nov 25 23:49:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev e5fdc532-761c-4705-bc09-e94e532c29b5 does not exist
Nov 25 23:49:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 9bb2f837-000c-427b-99fb-7f94f695e8cc does not exist
Nov 25 23:49:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:49:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:49:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:49:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:49:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:49:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:49:45 compute-0 sudo[240195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:49:45 compute-0 sudo[240195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:45 compute-0 sudo[240195]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:45 compute-0 sudo[240275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaionlslvlnkfxfwdyherkvsjscdlncl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114584.8462884-834-234053021989666/AnsiballZ_file.py'
Nov 25 23:49:45 compute-0 sudo[240275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:45 compute-0 sudo[240253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:49:45 compute-0 sudo[240253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:45 compute-0 sudo[240253]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:45 compute-0 sudo[240290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:49:45 compute-0 sudo[240290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:45 compute-0 sudo[240290]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:45 compute-0 sudo[240316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:49:45 compute-0 sudo[240316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:45 compute-0 podman[240314]: 2025-11-25 23:49:45.407273276 +0000 UTC m=+0.074448913 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 23:49:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:45 compute-0 python3.9[240288]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:45 compute-0 sudo[240275]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:45 compute-0 ceph-mon[75654]: pgmap v588: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:49:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:49:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:49:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:49:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:49:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:49:45 compute-0 podman[240476]: 2025-11-25 23:49:45.815276037 +0000 UTC m=+0.058502762 container create 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:49:45 compute-0 systemd[1]: Started libpod-conmon-197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234.scope.
Nov 25 23:49:45 compute-0 podman[240476]: 2025-11-25 23:49:45.785811436 +0000 UTC m=+0.029038211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:49:45 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:49:45 compute-0 podman[240476]: 2025-11-25 23:49:45.909046911 +0000 UTC m=+0.152273626 container init 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:49:45 compute-0 podman[240476]: 2025-11-25 23:49:45.923011331 +0000 UTC m=+0.166238056 container start 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 25 23:49:45 compute-0 podman[240476]: 2025-11-25 23:49:45.927414818 +0000 UTC m=+0.170641543 container attach 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:49:45 compute-0 jolly_taussig[240527]: 167 167
Nov 25 23:49:45 compute-0 systemd[1]: libpod-197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234.scope: Deactivated successfully.
Nov 25 23:49:45 compute-0 podman[240476]: 2025-11-25 23:49:45.932374599 +0000 UTC m=+0.175601314 container died 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 25 23:49:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8c9f2b897a7b3bd15dc3bb5f48c009791c826ece9d5aa8f40167bda26822df4-merged.mount: Deactivated successfully.
Nov 25 23:49:45 compute-0 podman[240476]: 2025-11-25 23:49:45.982506267 +0000 UTC m=+0.225732962 container remove 197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:49:45 compute-0 systemd[1]: libpod-conmon-197e2dec1d9dd2da19c4974772e2e2dc5f942deba33c3bd262e1376a60065234.scope: Deactivated successfully.
Nov 25 23:49:46 compute-0 sudo[240583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsejulalnqelfdxsqkyxlgzbijkpjljf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114585.6361475-834-183014053416660/AnsiballZ_file.py'
Nov 25 23:49:46 compute-0 sudo[240583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:46 compute-0 python3.9[240587]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:46 compute-0 podman[240593]: 2025-11-25 23:49:46.217950615 +0000 UTC m=+0.059353293 container create da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:49:46 compute-0 sudo[240583]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:46 compute-0 systemd[1]: Started libpod-conmon-da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e.scope.
Nov 25 23:49:46 compute-0 podman[240593]: 2025-11-25 23:49:46.195306455 +0000 UTC m=+0.036709133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:49:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:49:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:46 compute-0 podman[240593]: 2025-11-25 23:49:46.3166396 +0000 UTC m=+0.158042278 container init da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 25 23:49:46 compute-0 podman[240593]: 2025-11-25 23:49:46.325459323 +0000 UTC m=+0.166861971 container start da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 23:49:46 compute-0 podman[240593]: 2025-11-25 23:49:46.331047252 +0000 UTC m=+0.172449900 container attach da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:49:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:46 compute-0 sudo[240763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-konsmboszmsgiuogiqezewibpchfqlte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114586.3775704-834-7940080939112/AnsiballZ_file.py'
Nov 25 23:49:46 compute-0 sudo[240763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:46 compute-0 python3.9[240765]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:46 compute-0 sudo[240763]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:47 compute-0 sudo[240938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oklfjrdkhcploagmvvqsbulmlppkqdca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114587.1571257-834-255120565227126/AnsiballZ_file.py'
Nov 25 23:49:47 compute-0 sudo[240938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:47 compute-0 exciting_nightingale[240627]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:49:47 compute-0 exciting_nightingale[240627]: --> relative data size: 1.0
Nov 25 23:49:47 compute-0 exciting_nightingale[240627]: --> All data devices are unavailable
Nov 25 23:49:47 compute-0 systemd[1]: libpod-da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e.scope: Deactivated successfully.
Nov 25 23:49:47 compute-0 systemd[1]: libpod-da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e.scope: Consumed 1.078s CPU time.
Nov 25 23:49:47 compute-0 podman[240593]: 2025-11-25 23:49:47.549010842 +0000 UTC m=+1.390413480 container died da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:49:47 compute-0 ceph-mon[75654]: pgmap v589: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2ab1efb19df076ed233d07ec13bf33cf3cd0051e122c57e21da21a016375677-merged.mount: Deactivated successfully.
Nov 25 23:49:47 compute-0 podman[240593]: 2025-11-25 23:49:47.630443979 +0000 UTC m=+1.471846627 container remove da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:49:47 compute-0 systemd[1]: libpod-conmon-da3f562ce76e0781fa7fb73b7f182164041efc9f5043d5705e0e7fcfe51d167e.scope: Deactivated successfully.
Nov 25 23:49:47 compute-0 python3.9[240941]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:47 compute-0 sudo[240316]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:47 compute-0 sudo[240938]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:47 compute-0 sudo[240954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:49:47 compute-0 sudo[240954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:47 compute-0 sudo[240954]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:47 compute-0 sudo[241003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:49:47 compute-0 sudo[241003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:47 compute-0 sudo[241003]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:47 compute-0 sudo[241051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:49:47 compute-0 sudo[241051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:47 compute-0 sudo[241051]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:47 compute-0 sudo[241104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:49:47 compute-0 sudo[241104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:48 compute-0 sudo[241225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuhztyhtgjgktvjjpiymkpvwxlcqzwbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114587.8402774-834-201008894770697/AnsiballZ_file.py'
Nov 25 23:49:48 compute-0 sudo[241225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:48 compute-0 podman[241245]: 2025-11-25 23:49:48.399959338 +0000 UTC m=+0.059275912 container create e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:49:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:48 compute-0 systemd[1]: Started libpod-conmon-e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f.scope.
Nov 25 23:49:48 compute-0 python3.9[241230]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:48 compute-0 sudo[241225]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:49:48 compute-0 podman[241245]: 2025-11-25 23:49:48.376420564 +0000 UTC m=+0.035737128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:49:48 compute-0 podman[241245]: 2025-11-25 23:49:48.486387458 +0000 UTC m=+0.145704022 container init e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:49:48 compute-0 podman[241245]: 2025-11-25 23:49:48.492406457 +0000 UTC m=+0.151723011 container start e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:49:48 compute-0 podman[241245]: 2025-11-25 23:49:48.495677763 +0000 UTC m=+0.154994337 container attach e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:49:48 compute-0 boring_mccarthy[241261]: 167 167
Nov 25 23:49:48 compute-0 systemd[1]: libpod-e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f.scope: Deactivated successfully.
Nov 25 23:49:48 compute-0 podman[241245]: 2025-11-25 23:49:48.497311486 +0000 UTC m=+0.156628030 container died e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:49:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-610c56506e33f9b124375e4b36562a252773b3a4637a567b66210558e6edec30-merged.mount: Deactivated successfully.
Nov 25 23:49:48 compute-0 podman[241245]: 2025-11-25 23:49:48.531234576 +0000 UTC m=+0.190551120 container remove e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 23:49:48 compute-0 systemd[1]: libpod-conmon-e1d8b91cc4241479695bfb0d340a3404a8eb5669b9e3df740d45e83ea68a355f.scope: Deactivated successfully.
Nov 25 23:49:48 compute-0 podman[241348]: 2025-11-25 23:49:48.709419357 +0000 UTC m=+0.049975286 container create caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 23:49:48 compute-0 systemd[1]: Started libpod-conmon-caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079.scope.
Nov 25 23:49:48 compute-0 podman[241348]: 2025-11-25 23:49:48.683151701 +0000 UTC m=+0.023707690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:49:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c8eda9efeccb508495fa979b01194210606a9369ac41022d4fd7d05a15f8b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c8eda9efeccb508495fa979b01194210606a9369ac41022d4fd7d05a15f8b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c8eda9efeccb508495fa979b01194210606a9369ac41022d4fd7d05a15f8b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c8eda9efeccb508495fa979b01194210606a9369ac41022d4fd7d05a15f8b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:48 compute-0 podman[241348]: 2025-11-25 23:49:48.818058325 +0000 UTC m=+0.158614244 container init caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:49:48 compute-0 podman[241348]: 2025-11-25 23:49:48.833187606 +0000 UTC m=+0.173743505 container start caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:49:48 compute-0 podman[241348]: 2025-11-25 23:49:48.838131087 +0000 UTC m=+0.178686986 container attach caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:49:48 compute-0 sudo[241455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdjgoclewjaraavmotycivibufecedea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114588.6110249-834-163533498154078/AnsiballZ_file.py'
Nov 25 23:49:48 compute-0 sudo[241455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:49 compute-0 python3.9[241457]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:49 compute-0 sudo[241455]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:49 compute-0 ceph-mon[75654]: pgmap v590: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:49 compute-0 sad_lederberg[241398]: {
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:     "0": [
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:         {
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "devices": [
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "/dev/loop3"
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             ],
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_name": "ceph_lv0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_size": "21470642176",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "name": "ceph_lv0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "tags": {
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.cluster_name": "ceph",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.crush_device_class": "",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.encrypted": "0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.osd_id": "0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.type": "block",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.vdo": "0"
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             },
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "type": "block",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "vg_name": "ceph_vg0"
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:         }
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:     ],
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:     "1": [
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:         {
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "devices": [
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "/dev/loop4"
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             ],
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_name": "ceph_lv1",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_size": "21470642176",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "name": "ceph_lv1",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "tags": {
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.cluster_name": "ceph",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.crush_device_class": "",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.encrypted": "0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.osd_id": "1",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.type": "block",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.vdo": "0"
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             },
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "type": "block",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "vg_name": "ceph_vg1"
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:         }
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:     ],
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:     "2": [
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:         {
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "devices": [
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "/dev/loop5"
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             ],
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_name": "ceph_lv2",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_size": "21470642176",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "name": "ceph_lv2",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "tags": {
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.cluster_name": "ceph",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.crush_device_class": "",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.encrypted": "0",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.osd_id": "2",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.type": "block",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:                 "ceph.vdo": "0"
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             },
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "type": "block",
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:             "vg_name": "ceph_vg2"
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:         }
Nov 25 23:49:49 compute-0 sad_lederberg[241398]:     ]
Nov 25 23:49:49 compute-0 sad_lederberg[241398]: }
Nov 25 23:49:49 compute-0 systemd[1]: libpod-caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079.scope: Deactivated successfully.
Nov 25 23:49:49 compute-0 podman[241348]: 2025-11-25 23:49:49.628696702 +0000 UTC m=+0.969252611 container died caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 25 23:49:49 compute-0 sudo[241611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpfzvgmzindhzrtsrdyfkljnchewfwby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114589.3426454-834-260571770359529/AnsiballZ_file.py'
Nov 25 23:49:49 compute-0 sudo[241611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8c8eda9efeccb508495fa979b01194210606a9369ac41022d4fd7d05a15f8b3-merged.mount: Deactivated successfully.
Nov 25 23:49:49 compute-0 podman[241348]: 2025-11-25 23:49:49.683506145 +0000 UTC m=+1.024062064 container remove caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 23:49:49 compute-0 systemd[1]: libpod-conmon-caaeb111aa972fc6ff0dda5d2bfd6691ff98c3bff1e8bd8efaf3fd7bdeb5e079.scope: Deactivated successfully.
Nov 25 23:49:49 compute-0 sudo[241104]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:49 compute-0 sudo[241625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:49:49 compute-0 sudo[241625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:49 compute-0 sudo[241625]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:49 compute-0 sudo[241650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:49:49 compute-0 sudo[241650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:49 compute-0 sudo[241650]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:49 compute-0 python3.9[241614]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:49:49 compute-0 sudo[241611]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:49 compute-0 sudo[241675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:49:49 compute-0 sudo[241675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:49 compute-0 sudo[241675]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:49 compute-0 sudo[241701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:49:49 compute-0 sudo[241701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:50 compute-0 sshd-session[241437]: Invalid user user5 from 171.244.140.22 port 39058
Nov 25 23:49:50 compute-0 podman[241840]: 2025-11-25 23:49:50.293701672 +0000 UTC m=+0.046622476 container create bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 23:49:50 compute-0 systemd[1]: Started libpod-conmon-bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59.scope.
Nov 25 23:49:50 compute-0 podman[241840]: 2025-11-25 23:49:50.273493277 +0000 UTC m=+0.026414091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:49:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:49:50 compute-0 podman[241840]: 2025-11-25 23:49:50.395806367 +0000 UTC m=+0.148727211 container init bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 23:49:50 compute-0 podman[241840]: 2025-11-25 23:49:50.401794396 +0000 UTC m=+0.154715210 container start bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 23:49:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:50 compute-0 podman[241840]: 2025-11-25 23:49:50.40456404 +0000 UTC m=+0.157484884 container attach bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 23:49:50 compute-0 gallant_ritchie[241887]: 167 167
Nov 25 23:49:50 compute-0 systemd[1]: libpod-bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59.scope: Deactivated successfully.
Nov 25 23:49:50 compute-0 podman[241840]: 2025-11-25 23:49:50.408149664 +0000 UTC m=+0.161070478 container died bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:49:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-380de1be2e0748c1d3ec35ff30b31956962cf8bb6e4e3fde3a81baea615deefa-merged.mount: Deactivated successfully.
Nov 25 23:49:50 compute-0 sudo[241941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wycrgjfxlgaopytxxvqlpjzbeetorazz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114590.1653986-892-62626196571574/AnsiballZ_command.py'
Nov 25 23:49:50 compute-0 sudo[241941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:50 compute-0 podman[241840]: 2025-11-25 23:49:50.445555926 +0000 UTC m=+0.198476740 container remove bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ritchie, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:49:50 compute-0 systemd[1]: libpod-conmon-bbfd73e87335e5d9235ea2f0621e6a2836f25423dcf21722dd8ad9fb45e29d59.scope: Deactivated successfully.
Nov 25 23:49:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:50 compute-0 sshd-session[241437]: Received disconnect from 171.244.140.22 port 39058:11: Bye Bye [preauth]
Nov 25 23:49:50 compute-0 sshd-session[241437]: Disconnected from invalid user user5 171.244.140.22 port 39058 [preauth]
Nov 25 23:49:50 compute-0 podman[241956]: 2025-11-25 23:49:50.595984111 +0000 UTC m=+0.037935526 container create 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:49:50 compute-0 python3.9[241948]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:49:50 compute-0 systemd[1]: Started libpod-conmon-63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46.scope.
Nov 25 23:49:50 compute-0 sudo[241941]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1857217acaec6086b19605bba9dfcfe1608a73ddbcdd61338e9127b0669a118b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:50 compute-0 podman[241956]: 2025-11-25 23:49:50.58088287 +0000 UTC m=+0.022834305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1857217acaec6086b19605bba9dfcfe1608a73ddbcdd61338e9127b0669a118b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1857217acaec6086b19605bba9dfcfe1608a73ddbcdd61338e9127b0669a118b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1857217acaec6086b19605bba9dfcfe1608a73ddbcdd61338e9127b0669a118b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:49:50 compute-0 podman[241956]: 2025-11-25 23:49:50.688401729 +0000 UTC m=+0.130353164 container init 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:49:50 compute-0 podman[241956]: 2025-11-25 23:49:50.70049841 +0000 UTC m=+0.142449835 container start 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:49:50 compute-0 podman[241956]: 2025-11-25 23:49:50.703845118 +0000 UTC m=+0.145796543 container attach 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 23:49:51 compute-0 python3.9[242130]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 23:49:51 compute-0 ceph-mon[75654]: pgmap v591: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:51 compute-0 jolly_swanson[241976]: {
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "osd_id": 2,
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "type": "bluestore"
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:     },
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "osd_id": 1,
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "type": "bluestore"
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:     },
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "osd_id": 0,
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:         "type": "bluestore"
Nov 25 23:49:51 compute-0 jolly_swanson[241976]:     }
Nov 25 23:49:51 compute-0 jolly_swanson[241976]: }
Nov 25 23:49:51 compute-0 systemd[1]: libpod-63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46.scope: Deactivated successfully.
Nov 25 23:49:51 compute-0 systemd[1]: libpod-63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46.scope: Consumed 1.045s CPU time.
Nov 25 23:49:51 compute-0 podman[241956]: 2025-11-25 23:49:51.74443648 +0000 UTC m=+1.186387925 container died 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:49:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1857217acaec6086b19605bba9dfcfe1608a73ddbcdd61338e9127b0669a118b-merged.mount: Deactivated successfully.
Nov 25 23:49:51 compute-0 podman[241956]: 2025-11-25 23:49:51.818480821 +0000 UTC m=+1.260432246 container remove 63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swanson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:49:51 compute-0 systemd[1]: libpod-conmon-63d1afc96c7baf251365ca1a368a19398d172b746c379cacca903cc1d9462c46.scope: Deactivated successfully.
Nov 25 23:49:51 compute-0 sudo[241701]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:49:51 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:49:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:49:51 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:49:51 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 3f3de300-2790-40d3-8b7c-1bc201f3279a does not exist
Nov 25 23:49:51 compute-0 sudo[242218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:49:51 compute-0 sudo[242218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:51 compute-0 sudo[242218]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:52 compute-0 sudo[242272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:49:52 compute-0 sudo[242272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:49:52 compute-0 sudo[242272]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:52 compute-0 sudo[242370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qukqvarbrlpsffvurjldcykmvqifidvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114591.866541-910-130919638843974/AnsiballZ_systemd_service.py'
Nov 25 23:49:52 compute-0 sudo[242370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:52 compute-0 python3.9[242372]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 23:49:52 compute-0 systemd[1]: Reloading.
Nov 25 23:49:52 compute-0 systemd-rc-local-generator[242399]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:49:52 compute-0 systemd-sysv-generator[242402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:49:52 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:49:52 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:49:52 compute-0 sudo[242370]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:53 compute-0 sudo[242556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjrucofcjkwxdjcvmpywfkmmigzjojjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114593.2001007-918-134881445481641/AnsiballZ_command.py'
Nov 25 23:49:53 compute-0 sudo[242556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:53 compute-0 python3.9[242558]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:49:53 compute-0 sudo[242556]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:53 compute-0 ceph-mon[75654]: pgmap v592: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:54 compute-0 sudo[242709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drpecqzcdebtmxhryyqovpgkylcuycle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114594.0315104-918-130652238613741/AnsiballZ_command.py'
Nov 25 23:49:54 compute-0 sudo[242709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:54 compute-0 python3.9[242711]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:49:54 compute-0 sudo[242709]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:55 compute-0 sudo[242862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-birlcvzagmgdxqqcvfmzooxypfxuaqkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114594.7885973-918-16125534832886/AnsiballZ_command.py'
Nov 25 23:49:55 compute-0 sudo[242862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:55 compute-0 python3.9[242864]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:49:55 compute-0 sudo[242862]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:49:55 compute-0 ceph-mon[75654]: pgmap v593: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:55 compute-0 sudo[243015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aelovjcgkinoioatnofchvgcnwahhpwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114595.6039703-918-162620063149593/AnsiballZ_command.py'
Nov 25 23:49:55 compute-0 sudo[243015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:49:56
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.data']
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:49:56 compute-0 python3.9[243017]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:49:56 compute-0 sudo[243015]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:49:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:56 compute-0 sudo[243168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxdfdvntgefmmcbxnkalxobqtqjzukkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114596.2909324-918-261572290828633/AnsiballZ_command.py'
Nov 25 23:49:56 compute-0 sudo[243168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:56 compute-0 python3.9[243170]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:49:56 compute-0 sudo[243168]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:57 compute-0 sudo[243321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krxrzpqzomtbtocociqxmekrizhraoac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114597.0100257-918-7314925077491/AnsiballZ_command.py'
Nov 25 23:49:57 compute-0 sudo[243321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:57 compute-0 python3.9[243323]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:49:57 compute-0 sudo[243321]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:57 compute-0 ceph-mon[75654]: pgmap v594: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:58 compute-0 sudo[243474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkmpvygyosbrpmmgloeyrkemikocwwpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114597.7344248-918-254863243865615/AnsiballZ_command.py'
Nov 25 23:49:58 compute-0 sudo[243474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:58 compute-0 python3.9[243476]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:49:58 compute-0 sudo[243474]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:49:58 compute-0 sudo[243627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csnbgmhhcsneszrqxchajzpxnnnobvly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114598.4183223-918-265176448653980/AnsiballZ_command.py'
Nov 25 23:49:58 compute-0 sudo[243627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:49:58 compute-0 python3.9[243629]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 23:49:59 compute-0 sudo[243627]: pam_unix(sudo:session): session closed for user root
Nov 25 23:49:59 compute-0 ceph-mon[75654]: pgmap v595: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:00 compute-0 sudo[243780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skmjgzrlxfbcwhdvswkioudftexfyucr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114600.0308642-997-151010355628351/AnsiballZ_file.py'
Nov 25 23:50:00 compute-0 sudo[243780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:00 compute-0 python3.9[243782]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:00 compute-0 sudo[243780]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:01 compute-0 sudo[243932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xadfwtzqdzjghvyaxtqkzogrtiwjmuhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114600.7988398-997-221443166667779/AnsiballZ_file.py'
Nov 25 23:50:01 compute-0 sudo[243932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:01 compute-0 python3.9[243934]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:01 compute-0 sudo[243932]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:50:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:50:01 compute-0 ceph-mon[75654]: pgmap v596: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:02 compute-0 sudo[244084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqzvxduaubwpvugxtcvpcmviakbsbeno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114601.6793947-997-8016673246220/AnsiballZ_file.py'
Nov 25 23:50:02 compute-0 sudo[244084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:02 compute-0 python3.9[244086]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:02 compute-0 sudo[244084]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:03 compute-0 sudo[244236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceyyklfvarptkxtkusdiqspenduvpaao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114602.576851-1019-190455770815857/AnsiballZ_file.py'
Nov 25 23:50:03 compute-0 sudo[244236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:03 compute-0 python3.9[244238]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:03 compute-0 sudo[244236]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:03 compute-0 sudo[244388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzmxlfjxxdeolooyslzuyxazdlydsywg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114603.4518807-1019-264832008607756/AnsiballZ_file.py'
Nov 25 23:50:03 compute-0 sudo[244388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:03 compute-0 ceph-mon[75654]: pgmap v597: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:04 compute-0 python3.9[244390]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:04 compute-0 sudo[244388]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:04 compute-0 sudo[244540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwotzdvptbmdaqsguakkfjnkatfdjbxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114604.2752101-1019-244396053425166/AnsiballZ_file.py'
Nov 25 23:50:04 compute-0 sudo[244540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:04 compute-0 python3.9[244542]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:04 compute-0 sudo[244540]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:05 compute-0 sudo[244692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuzsijxfpnmydcvyhaeffuwmusejeeik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114605.058252-1019-85203113806250/AnsiballZ_file.py'
Nov 25 23:50:05 compute-0 sudo[244692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:05 compute-0 python3.9[244694]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:05 compute-0 sudo[244692]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:05 compute-0 ceph-mon[75654]: pgmap v598: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:06 compute-0 sudo[244844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjspwhhmfrbjgaryhjrctegheafwskfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114605.8147638-1019-221562268661621/AnsiballZ_file.py'
Nov 25 23:50:06 compute-0 sudo[244844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:06 compute-0 python3.9[244846]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:06 compute-0 sudo[244844]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:06 compute-0 sudo[244996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahornxqptgkqntlfxpcwkxsbewzuhoif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114606.5053911-1019-261485327418721/AnsiballZ_file.py'
Nov 25 23:50:06 compute-0 sudo[244996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:07 compute-0 python3.9[244998]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:07 compute-0 sudo[244996]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:07 compute-0 podman[244999]: 2025-11-25 23:50:07.127439581 +0000 UTC m=+0.074902315 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 23:50:07 compute-0 sudo[245170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgttjoghheycwqkxuediadkmqecamtyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114607.2330468-1019-209911181827267/AnsiballZ_file.py'
Nov 25 23:50:07 compute-0 sudo[245170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:07 compute-0 python3.9[245172]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:07 compute-0 sudo[245170]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:07 compute-0 ceph-mon[75654]: pgmap v599: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:09 compute-0 podman[245197]: 2025-11-25 23:50:09.269079724 +0000 UTC m=+0.100314429 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 23:50:09 compute-0 ceph-mon[75654]: pgmap v600: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:11 compute-0 ceph-mon[75654]: pgmap v601: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:13 compute-0 sudo[245349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiaxyszulfiaralqzrgkyslyhuaxwixj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114613.0870395-1208-220591205983181/AnsiballZ_getent.py'
Nov 25 23:50:13 compute-0 sudo[245349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:13 compute-0 python3.9[245351]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 25 23:50:13 compute-0 sudo[245349]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:13 compute-0 ceph-mon[75654]: pgmap v602: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:14 compute-0 sudo[245502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kebkkrucprzzhxzdhwousjwrrhwttuar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114614.0955594-1216-5405479613049/AnsiballZ_group.py'
Nov 25 23:50:14 compute-0 sudo[245502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:14 compute-0 python3.9[245504]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 23:50:14 compute-0 groupadd[245505]: group added to /etc/group: name=nova, GID=42436
Nov 25 23:50:14 compute-0 groupadd[245505]: group added to /etc/gshadow: name=nova
Nov 25 23:50:14 compute-0 groupadd[245505]: new group: name=nova, GID=42436
Nov 25 23:50:14 compute-0 sudo[245502]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:15 compute-0 sudo[245673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrfmkawcnwpqxlpmuwmlyxlhlvkjbwgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114615.1681964-1224-66342293924153/AnsiballZ_user.py'
Nov 25 23:50:15 compute-0 sudo[245673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:15 compute-0 podman[245634]: 2025-11-25 23:50:15.845785475 +0000 UTC m=+0.085904177 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 23:50:15 compute-0 ceph-mon[75654]: pgmap v603: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:16 compute-0 python3.9[245681]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 23:50:16 compute-0 useradd[245683]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 25 23:50:16 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 23:50:16 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 23:50:16 compute-0 useradd[245683]: add 'nova' to group 'libvirt'
Nov 25 23:50:16 compute-0 useradd[245683]: add 'nova' to shadow group 'libvirt'
Nov 25 23:50:16 compute-0 sudo[245673]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:17 compute-0 sshd-session[245715]: Accepted publickey for zuul from 192.168.122.30 port 59382 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:50:17 compute-0 systemd-logind[789]: New session 51 of user zuul.
Nov 25 23:50:17 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 25 23:50:17 compute-0 sshd-session[245715]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:50:17 compute-0 sshd-session[245718]: Received disconnect from 192.168.122.30 port 59382:11: disconnected by user
Nov 25 23:50:17 compute-0 sshd-session[245718]: Disconnected from user zuul 192.168.122.30 port 59382
Nov 25 23:50:17 compute-0 sshd-session[245715]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:50:17 compute-0 systemd-logind[789]: Session 51 logged out. Waiting for processes to exit.
Nov 25 23:50:17 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 25 23:50:17 compute-0 systemd-logind[789]: Removed session 51.
Nov 25 23:50:17 compute-0 ceph-mon[75654]: pgmap v604: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:18 compute-0 python3.9[245868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:50:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:18 compute-0 python3.9[245989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114617.5388978-1249-116114722696709/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:19 compute-0 python3.9[246139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:50:19 compute-0 python3.9[246215]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:19 compute-0 ceph-mon[75654]: pgmap v605: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:20 compute-0 python3.9[246365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:50:21 compute-0 python3.9[246486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114620.10584-1249-135969349598811/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:21 compute-0 ceph-mon[75654]: pgmap v606: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:22 compute-0 python3.9[246636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:50:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:22 compute-0 python3.9[246757]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114621.5166802-1249-55214467628208/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:23 compute-0 python3.9[246907]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:50:24 compute-0 ceph-mon[75654]: pgmap v607: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:24 compute-0 python3.9[247028]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114622.9751227-1249-263740768742195/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:24 compute-0 python3.9[247178]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:50:25 compute-0 python3.9[247299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114624.3093696-1249-7496938757179/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:26 compute-0 ceph-mon[75654]: pgmap v608: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:26 compute-0 sudo[247450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sansbmhziyffmgaaqiohnlgodymowwbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114625.7332108-1332-277388076138893/AnsiballZ_file.py'
Nov 25 23:50:26 compute-0 sudo[247450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:50:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:50:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:50:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:50:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:50:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:50:26 compute-0 python3.9[247452]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:50:26 compute-0 sudo[247450]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:26 compute-0 sudo[247602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nebilbyalixpxoynmhxzapnhebreuaqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114626.5117857-1340-41885873250751/AnsiballZ_copy.py'
Nov 25 23:50:26 compute-0 sudo[247602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:27 compute-0 python3.9[247604]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:50:27 compute-0 sudo[247602]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:27 compute-0 sudo[247754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sakyexlkocfvhfrsqukgehnaislqadpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114627.4173527-1348-31875847867695/AnsiballZ_stat.py'
Nov 25 23:50:27 compute-0 sudo[247754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:28 compute-0 python3.9[247756]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:50:28 compute-0 sudo[247754]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:28 compute-0 ceph-mon[75654]: pgmap v609: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:28 compute-0 sudo[247906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiwsftxytbbnqhpetrclqqchnfhezmkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114628.2549345-1356-271612989107131/AnsiballZ_stat.py'
Nov 25 23:50:28 compute-0 sudo[247906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:28 compute-0 python3.9[247908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:50:28 compute-0 sudo[247906]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:29 compute-0 sudo[248029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwbqucamgdxnfxczgykdctunvnwhzxdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114628.2549345-1356-271612989107131/AnsiballZ_copy.py'
Nov 25 23:50:29 compute-0 sudo[248029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:29 compute-0 python3.9[248031]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764114628.2549345-1356-271612989107131/.source _original_basename=.l3qcxbuz follow=False checksum=8498dcb380c8ad9e7713c1be800fda9fd5956bfd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 25 23:50:29 compute-0 sudo[248029]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:30 compute-0 ceph-mon[75654]: pgmap v610: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:30 compute-0 python3.9[248183]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:50:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:30 compute-0 sshd-session[247324]: Connection closed by 45.78.216.132 port 49644 [preauth]
Nov 25 23:50:31 compute-0 python3.9[248336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:50:31 compute-0 python3.9[248457]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114630.7055583-1382-261293425673410/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:32 compute-0 ceph-mon[75654]: pgmap v611: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:32 compute-0 python3.9[248607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 23:50:33 compute-0 python3.9[248728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764114632.0482116-1397-84951417737157/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 23:50:33 compute-0 sudo[248878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-levhkgxlcnfqacfgroadripxgjqcncqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114633.6087708-1414-46282373934026/AnsiballZ_container_config_data.py'
Nov 25 23:50:33 compute-0 sudo[248878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:34 compute-0 ceph-mon[75654]: pgmap v612: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:34 compute-0 python3.9[248880]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 25 23:50:34 compute-0 sudo[248878]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:34 compute-0 sudo[249030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggvwbmqfchrlfioofwxfoedfllgwsvxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114634.4510286-1423-78269713765794/AnsiballZ_container_config_hash.py'
Nov 25 23:50:34 compute-0 sudo[249030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:34 compute-0 python3.9[249032]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 23:50:34 compute-0 sudo[249030]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:35 compute-0 sudo[249182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjytuoovivsclrgxocqfykfmbiubcnir ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764114635.3622372-1433-50237225066321/AnsiballZ_edpm_container_manage.py'
Nov 25 23:50:35 compute-0 sudo[249182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:36 compute-0 python3[249184]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 23:50:36 compute-0 ceph-mon[75654]: pgmap v613: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:37 compute-0 podman[249221]: 2025-11-25 23:50:37.242093999 +0000 UTC m=+0.071893766 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 23:50:38 compute-0 ceph-mon[75654]: pgmap v614: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:39 compute-0 ceph-mon[75654]: pgmap v615: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:50:40.757 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:50:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:50:40.758 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:50:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:50:40.758 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:50:41 compute-0 podman[249258]: 2025-11-25 23:50:41.124664957 +0000 UTC m=+0.941673951 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 23:50:41 compute-0 ceph-mon[75654]: pgmap v616: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:44 compute-0 ceph-mon[75654]: pgmap v617: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:44 compute-0 podman[249198]: 2025-11-25 23:50:44.634052478 +0000 UTC m=+8.523447550 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 23:50:44 compute-0 podman[249342]: 2025-11-25 23:50:44.858460044 +0000 UTC m=+0.072333457 container create a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm)
Nov 25 23:50:44 compute-0 podman[249342]: 2025-11-25 23:50:44.824821992 +0000 UTC m=+0.038695475 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 23:50:44 compute-0 python3[249184]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 25 23:50:45 compute-0 sudo[249182]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:45 compute-0 ceph-mon[75654]: pgmap v618: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:45 compute-0 sudo[249530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxexogurnadnhyetltywmjyxknnrmbsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114645.2410467-1441-192734629658706/AnsiballZ_stat.py'
Nov 25 23:50:45 compute-0 sudo[249530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:45 compute-0 python3.9[249532]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:50:45 compute-0 sudo[249530]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:46 compute-0 podman[249559]: 2025-11-25 23:50:46.238618321 +0000 UTC m=+0.057648188 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 23:50:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:46 compute-0 sudo[249703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuzwpyppqvxfithwnkkwdirfrddofqlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114646.3116395-1453-166400923912020/AnsiballZ_container_config_data.py'
Nov 25 23:50:46 compute-0 sudo[249703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:46 compute-0 python3.9[249705]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 25 23:50:46 compute-0 sudo[249703]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:47 compute-0 ceph-mon[75654]: pgmap v619: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:47 compute-0 sudo[249855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syuwtwbskbfopejkrufewrkqyxvmjytp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114647.188906-1462-67851987211011/AnsiballZ_container_config_hash.py'
Nov 25 23:50:47 compute-0 sudo[249855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:47 compute-0 python3.9[249857]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 23:50:47 compute-0 sudo[249855]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:48 compute-0 sudo[250007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzllenllezetjdhzmjdwhlkctrcikzdd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764114648.0524576-1472-65222518125432/AnsiballZ_edpm_container_manage.py'
Nov 25 23:50:48 compute-0 sudo[250007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:48 compute-0 python3[250009]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 23:50:48 compute-0 podman[250047]: 2025-11-25 23:50:48.857099077 +0000 UTC m=+0.051899556 container create 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 23:50:48 compute-0 podman[250047]: 2025-11-25 23:50:48.825789048 +0000 UTC m=+0.020589547 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 23:50:48 compute-0 python3[250009]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 25 23:50:48 compute-0 sudo[250007]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:49 compute-0 sudo[250236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmefiaolpvrfbecbqminxemcestrgpod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114649.1660652-1480-151627360699436/AnsiballZ_stat.py'
Nov 25 23:50:49 compute-0 sudo[250236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:49 compute-0 ceph-mon[75654]: pgmap v620: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:49 compute-0 python3.9[250238]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:50:49 compute-0 sudo[250236]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:50 compute-0 sudo[250390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzhfyzbcpdxmyojnlsjoxpmxaelloavx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114649.941988-1489-126898029537550/AnsiballZ_file.py'
Nov 25 23:50:50 compute-0 sudo[250390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:50 compute-0 python3.9[250392]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:50:50 compute-0 sudo[250390]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:51 compute-0 sudo[250541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzsdirpdyjcdgnrmwaevakawvmfowtkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114650.5445268-1489-246375322935149/AnsiballZ_copy.py'
Nov 25 23:50:51 compute-0 sudo[250541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:51 compute-0 python3.9[250543]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764114650.5445268-1489-246375322935149/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 23:50:51 compute-0 sudo[250541]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:51 compute-0 sudo[250617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hernvffgoujqkolvnkzukvmjztbmbtdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114650.5445268-1489-246375322935149/AnsiballZ_systemd.py'
Nov 25 23:50:51 compute-0 sudo[250617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:51 compute-0 ceph-mon[75654]: pgmap v621: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:51 compute-0 python3.9[250619]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 23:50:51 compute-0 systemd[1]: Reloading.
Nov 25 23:50:51 compute-0 systemd-sysv-generator[250645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:50:51 compute-0 systemd-rc-local-generator[250642]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:50:52 compute-0 sudo[250617]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:52 compute-0 sudo[250654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:50:52 compute-0 sudo[250654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:52 compute-0 sudo[250654]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:52 compute-0 sudo[250679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:50:52 compute-0 sudo[250679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:52 compute-0 sudo[250679]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:52 compute-0 sudo[250712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:50:52 compute-0 sudo[250712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:52 compute-0 sudo[250712]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:52 compute-0 sudo[250752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:50:52 compute-0 sudo[250752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:52 compute-0 sudo[250827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-newtjqspwxdvpblmprfvyrpjscijmyth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114650.5445268-1489-246375322935149/AnsiballZ_systemd.py'
Nov 25 23:50:52 compute-0 sudo[250827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:52 compute-0 python3.9[250829]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 23:50:52 compute-0 sudo[250752]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:50:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:50:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:50:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:50:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:50:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:50:52 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev a30c7d7d-19ef-409a-9846-f7d9f6673888 does not exist
Nov 25 23:50:52 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev e0d96717-f431-430c-bd71-8cc71f0c692d does not exist
Nov 25 23:50:52 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev b09e8da1-2ed8-47cd-8479-92d1b119d5c0 does not exist
Nov 25 23:50:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:50:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:50:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:50:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:50:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:50:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:50:52 compute-0 systemd[1]: Reloading.
Nov 25 23:50:52 compute-0 systemd-rc-local-generator[250908]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 23:50:52 compute-0 systemd-sysv-generator[250912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 23:50:53 compute-0 sudo[250862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:50:53 compute-0 systemd[1]: Starting nova_compute container...
Nov 25 23:50:53 compute-0 sudo[250862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:53 compute-0 sudo[250862]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:53 compute-0 sudo[250924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:50:53 compute-0 sudo[250924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:53 compute-0 sudo[250924]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:53 compute-0 podman[250922]: 2025-11-25 23:50:53.324307696 +0000 UTC m=+0.136806066 container init 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 23:50:53 compute-0 sudo[250960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:50:53 compute-0 podman[250922]: 2025-11-25 23:50:53.331806524 +0000 UTC m=+0.144304894 container start 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm)
Nov 25 23:50:53 compute-0 sudo[250960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:53 compute-0 podman[250922]: nova_compute
Nov 25 23:50:53 compute-0 sudo[250960]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:53 compute-0 nova_compute[250966]: + sudo -E kolla_set_configs
Nov 25 23:50:53 compute-0 systemd[1]: Started nova_compute container.
Nov 25 23:50:53 compute-0 sudo[250827]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:53 compute-0 sudo[250993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:50:53 compute-0 sudo[250993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Validating config file
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying service configuration files
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Deleting /etc/ceph
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Creating directory /etc/ceph
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Writing out command to execute
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 23:50:53 compute-0 nova_compute[250966]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 23:50:53 compute-0 nova_compute[250966]: ++ cat /run_command
Nov 25 23:50:53 compute-0 nova_compute[250966]: + CMD=nova-compute
Nov 25 23:50:53 compute-0 nova_compute[250966]: + ARGS=
Nov 25 23:50:53 compute-0 nova_compute[250966]: + sudo kolla_copy_cacerts
Nov 25 23:50:53 compute-0 nova_compute[250966]: + [[ ! -n '' ]]
Nov 25 23:50:53 compute-0 nova_compute[250966]: + . kolla_extend_start
Nov 25 23:50:53 compute-0 nova_compute[250966]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 23:50:53 compute-0 nova_compute[250966]: Running command: 'nova-compute'
Nov 25 23:50:53 compute-0 nova_compute[250966]: + umask 0022
Nov 25 23:50:53 compute-0 nova_compute[250966]: + exec nova-compute
Nov 25 23:50:53 compute-0 ceph-mon[75654]: pgmap v622: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:50:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:50:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:50:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:50:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:50:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:50:53 compute-0 podman[251086]: 2025-11-25 23:50:53.751447613 +0000 UTC m=+0.048591379 container create cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:50:53 compute-0 systemd[1]: Started libpod-conmon-cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e.scope.
Nov 25 23:50:53 compute-0 podman[251086]: 2025-11-25 23:50:53.724685674 +0000 UTC m=+0.021829460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:50:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:50:53 compute-0 podman[251086]: 2025-11-25 23:50:53.841274423 +0000 UTC m=+0.138418169 container init cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:50:53 compute-0 podman[251086]: 2025-11-25 23:50:53.847620131 +0000 UTC m=+0.144763857 container start cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:50:53 compute-0 podman[251086]: 2025-11-25 23:50:53.854461882 +0000 UTC m=+0.151605628 container attach cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:50:53 compute-0 thirsty_nash[251103]: 167 167
Nov 25 23:50:53 compute-0 systemd[1]: libpod-cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e.scope: Deactivated successfully.
Nov 25 23:50:53 compute-0 podman[251086]: 2025-11-25 23:50:53.865980657 +0000 UTC m=+0.163124393 container died cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 23:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f2b0926f6b5c5f6d31d623c149208697a32edeed18d46ab6cdb14ed445d12cd-merged.mount: Deactivated successfully.
Nov 25 23:50:53 compute-0 podman[251086]: 2025-11-25 23:50:53.914515073 +0000 UTC m=+0.211658799 container remove cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 25 23:50:53 compute-0 systemd[1]: libpod-conmon-cfddd113c4a733b5a80ba35e08240f5351c9ac1c69abbf5a4b1c7222c98d253e.scope: Deactivated successfully.
Nov 25 23:50:54 compute-0 podman[251209]: 2025-11-25 23:50:54.084142597 +0000 UTC m=+0.042284551 container create c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:50:54 compute-0 systemd[1]: Started libpod-conmon-c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470.scope.
Nov 25 23:50:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:54 compute-0 podman[251209]: 2025-11-25 23:50:54.062442173 +0000 UTC m=+0.020584137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:50:54 compute-0 podman[251209]: 2025-11-25 23:50:54.17670968 +0000 UTC m=+0.134851634 container init c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:50:54 compute-0 podman[251209]: 2025-11-25 23:50:54.186451198 +0000 UTC m=+0.144593162 container start c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:50:54 compute-0 podman[251209]: 2025-11-25 23:50:54.189415006 +0000 UTC m=+0.147556960 container attach c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:50:54 compute-0 python3.9[251268]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:50:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:55 compute-0 python3.9[251432]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:50:55 compute-0 stupefied_cray[251269]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:50:55 compute-0 stupefied_cray[251269]: --> relative data size: 1.0
Nov 25 23:50:55 compute-0 stupefied_cray[251269]: --> All data devices are unavailable
Nov 25 23:50:55 compute-0 systemd[1]: libpod-c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470.scope: Deactivated successfully.
Nov 25 23:50:55 compute-0 systemd[1]: libpod-c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470.scope: Consumed 1.030s CPU time.
Nov 25 23:50:55 compute-0 podman[251209]: 2025-11-25 23:50:55.293500239 +0000 UTC m=+1.251642203 container died c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-57f5afc1eded108a80f72303fdb2a30378b439ec1c063139e9eef5be35cbbcda-merged.mount: Deactivated successfully.
Nov 25 23:50:55 compute-0 podman[251209]: 2025-11-25 23:50:55.36109458 +0000 UTC m=+1.319236544 container remove c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:50:55 compute-0 systemd[1]: libpod-conmon-c919ca432f7c22bd0258527b4b6ee575be441b0d1a8d516ca42aa49ba5cad470.scope: Deactivated successfully.
Nov 25 23:50:55 compute-0 sudo[250993]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:55 compute-0 sudo[251488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:50:55 compute-0 sudo[251488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:55 compute-0 sudo[251488]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:55 compute-0 sudo[251538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:50:55 compute-0 sudo[251538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:55 compute-0 sudo[251538]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:55 compute-0 ceph-mon[75654]: pgmap v623: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:50:55 compute-0 sudo[251590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:50:55 compute-0 sudo[251590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:55 compute-0 sudo[251590]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:55 compute-0 sudo[251638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:50:55 compute-0 sudo[251638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:55 compute-0 nova_compute[250966]: 2025-11-25 23:50:55.825 250990 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 23:50:55 compute-0 nova_compute[250966]: 2025-11-25 23:50:55.826 250990 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 23:50:55 compute-0 nova_compute[250966]: 2025-11-25 23:50:55.826 250990 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 23:50:55 compute-0 nova_compute[250966]: 2025-11-25 23:50:55.826 250990 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 25 23:50:55 compute-0 python3.9[251715]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 23:50:55 compute-0 nova_compute[250966]: 2025-11-25 23:50:55.973 250990 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.008 250990 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.009 250990 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:50:56
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data']
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:50:56 compute-0 podman[251764]: 2025-11-25 23:50:56.083664664 +0000 UTC m=+0.052907612 container create b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:50:56 compute-0 systemd[1]: Started libpod-conmon-b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10.scope.
Nov 25 23:50:56 compute-0 podman[251764]: 2025-11-25 23:50:56.062662398 +0000 UTC m=+0.031905346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:50:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:50:56 compute-0 podman[251764]: 2025-11-25 23:50:56.186106869 +0000 UTC m=+0.155349797 container init b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:50:56 compute-0 podman[251764]: 2025-11-25 23:50:56.195311143 +0000 UTC m=+0.164554051 container start b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:50:56 compute-0 podman[251764]: 2025-11-25 23:50:56.198108347 +0000 UTC m=+0.167351255 container attach b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:50:56 compute-0 blissful_shaw[251802]: 167 167
Nov 25 23:50:56 compute-0 systemd[1]: libpod-b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10.scope: Deactivated successfully.
Nov 25 23:50:56 compute-0 podman[251764]: 2025-11-25 23:50:56.203618733 +0000 UTC m=+0.172861671 container died b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:50:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ef92f02c87643eb05c6808c0ecc5f42414cbe515ff9f03d49c3bfa0ac50d532-merged.mount: Deactivated successfully.
Nov 25 23:50:56 compute-0 podman[251764]: 2025-11-25 23:50:56.244553807 +0000 UTC m=+0.213796725 container remove b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 25 23:50:56 compute-0 systemd[1]: libpod-conmon-b75b496c7ef7ca769f14708f0e20221126886c9aa77a7a9c461401a5c8f4af10.scope: Deactivated successfully.
Nov 25 23:50:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:56 compute-0 podman[251876]: 2025-11-25 23:50:56.438151847 +0000 UTC m=+0.047602762 container create 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:50:56 compute-0 systemd[1]: Started libpod-conmon-10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634.scope.
Nov 25 23:50:56 compute-0 podman[251876]: 2025-11-25 23:50:56.414964563 +0000 UTC m=+0.024415498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:50:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f22f2c9a6bd2cabd883c3e1a1d494855346aa982b9e6556d6fd3918085d841/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f22f2c9a6bd2cabd883c3e1a1d494855346aa982b9e6556d6fd3918085d841/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f22f2c9a6bd2cabd883c3e1a1d494855346aa982b9e6556d6fd3918085d841/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80f22f2c9a6bd2cabd883c3e1a1d494855346aa982b9e6556d6fd3918085d841/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:56 compute-0 podman[251876]: 2025-11-25 23:50:56.539444901 +0000 UTC m=+0.148895836 container init 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:50:56 compute-0 podman[251876]: 2025-11-25 23:50:56.552941259 +0000 UTC m=+0.162392144 container start 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 23:50:56 compute-0 podman[251876]: 2025-11-25 23:50:56.556684207 +0000 UTC m=+0.166135122 container attach 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.689 250990 INFO nova.virt.driver [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.893 250990 INFO nova.compute.provider_config [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 23:50:56 compute-0 sudo[251971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewpgjnvdlkjzhxraqrwubymmqzptyjms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114656.300666-1549-149022639768943/AnsiballZ_podman_container.py'
Nov 25 23:50:56 compute-0 sudo[251971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.918 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.919 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.920 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.920 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.920 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.920 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.921 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.921 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.921 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.922 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.922 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.922 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.922 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.923 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.923 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.923 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.924 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.924 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.924 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.924 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.925 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.925 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.925 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.926 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.926 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.926 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.926 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.927 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.927 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.927 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.928 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.928 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.928 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.929 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.929 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.929 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.929 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.930 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.930 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.930 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.930 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.931 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.931 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.931 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.932 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.932 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.932 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.933 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.933 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.933 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.933 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.934 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.934 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.934 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.935 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.935 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.935 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.935 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.936 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.936 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.936 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.936 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.937 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.937 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.937 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.937 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.938 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.938 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.938 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.938 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.939 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.939 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.939 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.940 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.940 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.940 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.940 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.941 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.941 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.941 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.942 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.942 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.942 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.943 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.943 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.943 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.943 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.944 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.944 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.944 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.944 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.945 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.945 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.945 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.946 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.946 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.946 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.946 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.947 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.947 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.947 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.947 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.948 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.949 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.949 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.949 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.949 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.949 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.950 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.950 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.950 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.950 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.950 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.951 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.951 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.951 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.951 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.951 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.952 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.952 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.952 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.952 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.952 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.953 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.953 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.953 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.953 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.953 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.954 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.954 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.954 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.954 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.954 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.955 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.956 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.956 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.956 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.956 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.956 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.957 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.957 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.957 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.957 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.957 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.958 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.958 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.958 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.958 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.958 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.959 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.960 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:56 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.961 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.962 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.963 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.964 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.965 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.966 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.967 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.968 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.969 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.970 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.971 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.972 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.973 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.974 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.975 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.976 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.977 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.978 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.979 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.980 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.981 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.982 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.983 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.984 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.985 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.986 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.987 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.987 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.987 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.999 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.999 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.999 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.999 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:56.999 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.000 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.001 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.001 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.001 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.001 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.001 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.002 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.002 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.002 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.002 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.002 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.003 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.003 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.003 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.003 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.004 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.004 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.004 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.004 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.004 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.005 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.005 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.005 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.005 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.005 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.006 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.006 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.006 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.006 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.006 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.007 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.007 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.007 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.007 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.007 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.008 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.008 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.008 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.008 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.008 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.009 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.009 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.009 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.009 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.009 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.010 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.010 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.010 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.010 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.010 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.011 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.011 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.011 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.011 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.011 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.012 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.012 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.012 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.012 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.013 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.013 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.013 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.013 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.013 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.014 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.014 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.014 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.014 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.015 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.015 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.015 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.015 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.015 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.016 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.016 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.016 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.016 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.016 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.017 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.017 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.017 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.017 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.017 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.018 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.018 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.018 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.018 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.018 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.019 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.019 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.019 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.019 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.019 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.020 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.020 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.020 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.020 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.020 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.021 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.021 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.021 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.021 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.021 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.022 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.022 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.022 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.022 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.022 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.023 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.023 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.023 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.023 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.023 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.024 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.024 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.024 250990 WARNING oslo_config.cfg [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 23:50:57 compute-0 nova_compute[250966]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 23:50:57 compute-0 nova_compute[250966]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 23:50:57 compute-0 nova_compute[250966]: and ``live_migration_inbound_addr`` respectively.
Nov 25 23:50:57 compute-0 nova_compute[250966]: ).  Its value may be silently ignored in the future.
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.024 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.025 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.025 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.025 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.025 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.025 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.026 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.026 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.026 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.026 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.026 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.027 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.027 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.027 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.027 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.027 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.028 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.028 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.028 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rbd_secret_uuid        = 101922db-575f-58e2-980f-928050464f69 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.028 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.028 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.029 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.029 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.029 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.029 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.029 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.030 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.030 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.030 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.030 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.030 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.031 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.031 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.031 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.031 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.031 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.032 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.032 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.032 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.032 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.032 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.033 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.033 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.033 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.033 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.033 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.034 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.034 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.034 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.035 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.035 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.035 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.036 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.036 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.036 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.037 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.037 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.037 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.038 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.038 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.038 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.039 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.040 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.041 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.042 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.043 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.044 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.045 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.046 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.047 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.048 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.049 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.050 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.050 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.050 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.050 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.050 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.051 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.052 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.053 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.054 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.055 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.056 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.057 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.058 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.059 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.060 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.061 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.062 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.063 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.064 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.065 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.066 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.067 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.068 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.069 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.070 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.071 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.071 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.071 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.071 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.071 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.072 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.073 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.074 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.075 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.076 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.077 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.078 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.079 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.080 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.081 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.082 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.083 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.084 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.085 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.086 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.087 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.088 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.089 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.090 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.091 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.091 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.091 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.091 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.091 250990 DEBUG oslo_service.service [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.092 250990 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.124 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.125 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.125 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.125 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 23:50:57 compute-0 python3.9[251973]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 23:50:57 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 23:50:57 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.210 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f64b8f377c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.212 250990 DEBUG nova.virt.libvirt.host [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f64b8f377c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.213 250990 INFO nova.virt.libvirt.driver [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Connection event '1' reason 'None'
Nov 25 23:50:57 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.235 250990 WARNING nova.virt.libvirt.driver [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 23:50:57 compute-0 nova_compute[250966]: 2025-11-25 23:50:57.235 250990 DEBUG nova.virt.libvirt.volume.mount [None req-8c98dd3e-5696-4ba6-a7b5-64006fd5add8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 25 23:50:57 compute-0 sudo[251971]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]: {
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:     "0": [
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:         {
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "devices": [
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "/dev/loop3"
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             ],
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_name": "ceph_lv0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_size": "21470642176",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "name": "ceph_lv0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "tags": {
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.cluster_name": "ceph",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.crush_device_class": "",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.encrypted": "0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.osd_id": "0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.type": "block",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.vdo": "0"
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             },
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "type": "block",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "vg_name": "ceph_vg0"
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:         }
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:     ],
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:     "1": [
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:         {
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "devices": [
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "/dev/loop4"
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             ],
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_name": "ceph_lv1",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_size": "21470642176",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "name": "ceph_lv1",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "tags": {
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.cluster_name": "ceph",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.crush_device_class": "",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.encrypted": "0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.osd_id": "1",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.type": "block",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.vdo": "0"
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             },
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "type": "block",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "vg_name": "ceph_vg1"
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:         }
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:     ],
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:     "2": [
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:         {
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "devices": [
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "/dev/loop5"
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             ],
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_name": "ceph_lv2",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_size": "21470642176",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "name": "ceph_lv2",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "tags": {
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.cluster_name": "ceph",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.crush_device_class": "",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.encrypted": "0",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.osd_id": "2",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.type": "block",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:                 "ceph.vdo": "0"
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             },
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "type": "block",
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:             "vg_name": "ceph_vg2"
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:         }
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]:     ]
Nov 25 23:50:57 compute-0 brave_ramanujan[251893]: }
Nov 25 23:50:57 compute-0 systemd[1]: libpod-10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634.scope: Deactivated successfully.
Nov 25 23:50:57 compute-0 podman[252050]: 2025-11-25 23:50:57.341593674 +0000 UTC m=+0.023228127 container died 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 23:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f22f2c9a6bd2cabd883c3e1a1d494855346aa982b9e6556d6fd3918085d841-merged.mount: Deactivated successfully.
Nov 25 23:50:57 compute-0 podman[252050]: 2025-11-25 23:50:57.398521712 +0000 UTC m=+0.080156135 container remove 10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ramanujan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:50:57 compute-0 systemd[1]: libpod-conmon-10711b3013dde7894da6e72b9a40da94245ed0dfde1ab7ebd31f298968192634.scope: Deactivated successfully.
Nov 25 23:50:57 compute-0 sudo[251638]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:57 compute-0 sudo[252090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:50:57 compute-0 sudo[252090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:57 compute-0 sudo[252090]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:57 compute-0 ceph-mon[75654]: pgmap v624: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:57 compute-0 sudo[252138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:50:57 compute-0 sudo[252138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:57 compute-0 sudo[252138]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:57 compute-0 sudo[252192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:50:57 compute-0 sudo[252192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:57 compute-0 sudo[252192]: pam_unix(sudo:session): session closed for user root
Nov 25 23:50:57 compute-0 sudo[252240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:50:57 compute-0 sudo[252240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:50:57 compute-0 sudo[252316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmgfjqyvldhkhronrwiuitgncbqtulao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114657.5074682-1557-244561295163584/AnsiballZ_systemd.py'
Nov 25 23:50:57 compute-0 sudo[252316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:50:58 compute-0 podman[252366]: 2025-11-25 23:50:58.048452962 +0000 UTC m=+0.044229183 container create 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:50:58 compute-0 python3.9[252319]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 23:50:58 compute-0 systemd[1]: Started libpod-conmon-54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d.scope.
Nov 25 23:50:58 compute-0 systemd[1]: Stopping nova_compute container...
Nov 25 23:50:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:50:58 compute-0 podman[252366]: 2025-11-25 23:50:58.031874043 +0000 UTC m=+0.027650284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:50:58 compute-0 podman[252366]: 2025-11-25 23:50:58.135534329 +0000 UTC m=+0.131310570 container init 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 23:50:58 compute-0 podman[252366]: 2025-11-25 23:50:58.141882027 +0000 UTC m=+0.137658248 container start 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:50:58 compute-0 agitated_sanderson[252386]: 167 167
Nov 25 23:50:58 compute-0 systemd[1]: libpod-54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d.scope: Deactivated successfully.
Nov 25 23:50:58 compute-0 podman[252366]: 2025-11-25 23:50:58.148136283 +0000 UTC m=+0.143912504 container attach 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:50:58 compute-0 podman[252366]: 2025-11-25 23:50:58.148729399 +0000 UTC m=+0.144505650 container died 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c58f75b4eb28cf5255853ee5ed7e68d28e3c974912bd34e78e536b2bd6e54d86-merged.mount: Deactivated successfully.
Nov 25 23:50:58 compute-0 podman[252366]: 2025-11-25 23:50:58.190560327 +0000 UTC m=+0.186336568 container remove 54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sanderson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 23:50:58 compute-0 nova_compute[250966]: 2025-11-25 23:50:58.191 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 23:50:58 compute-0 nova_compute[250966]: 2025-11-25 23:50:58.192 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 23:50:58 compute-0 nova_compute[250966]: 2025-11-25 23:50:58.192 250990 DEBUG oslo_concurrency.lockutils [None req-06409ee8-3bda-4c1b-a441-c8f76c704693 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 23:50:58 compute-0 systemd[1]: libpod-conmon-54a191c444727e2bab4f928b088c41bbe38f6a5da2f822d0dba363429b0f302d.scope: Deactivated successfully.
Nov 25 23:50:58 compute-0 podman[252423]: 2025-11-25 23:50:58.372933089 +0000 UTC m=+0.036990711 container create 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 25 23:50:58 compute-0 systemd[1]: Started libpod-conmon-8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b.scope.
Nov 25 23:50:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:50:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2992ad7e4d325d4135aa23a9e31fd72500878f1a53e2d98b6d6d67c1210d50c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:58 compute-0 podman[252423]: 2025-11-25 23:50:58.356642068 +0000 UTC m=+0.020699710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2992ad7e4d325d4135aa23a9e31fd72500878f1a53e2d98b6d6d67c1210d50c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2992ad7e4d325d4135aa23a9e31fd72500878f1a53e2d98b6d6d67c1210d50c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2992ad7e4d325d4135aa23a9e31fd72500878f1a53e2d98b6d6d67c1210d50c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:50:58 compute-0 podman[252423]: 2025-11-25 23:50:58.468726897 +0000 UTC m=+0.132784559 container init 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:50:58 compute-0 podman[252423]: 2025-11-25 23:50:58.47561886 +0000 UTC m=+0.139676482 container start 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 23:50:58 compute-0 podman[252423]: 2025-11-25 23:50:58.478594959 +0000 UTC m=+0.142652591 container attach 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:50:58 compute-0 systemd[1]: libpod-16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a.scope: Deactivated successfully.
Nov 25 23:50:58 compute-0 systemd[1]: libpod-16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a.scope: Consumed 3.114s CPU time.
Nov 25 23:50:58 compute-0 virtqemud[251995]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 23:50:58 compute-0 virtqemud[251995]: hostname: compute-0
Nov 25 23:50:58 compute-0 virtqemud[251995]: End of file while reading data: Input/output error
Nov 25 23:50:58 compute-0 podman[252388]: 2025-11-25 23:50:58.621553396 +0000 UTC m=+0.517035789 container died 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 23:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a-userdata-shm.mount: Deactivated successfully.
Nov 25 23:50:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045-merged.mount: Deactivated successfully.
Nov 25 23:50:59 compute-0 zen_wilbur[252439]: {
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "osd_id": 2,
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "type": "bluestore"
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:     },
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "osd_id": 1,
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "type": "bluestore"
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:     },
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "osd_id": 0,
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:         "type": "bluestore"
Nov 25 23:50:59 compute-0 zen_wilbur[252439]:     }
Nov 25 23:50:59 compute-0 zen_wilbur[252439]: }
Nov 25 23:50:59 compute-0 systemd[1]: libpod-8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b.scope: Deactivated successfully.
Nov 25 23:50:59 compute-0 systemd[1]: libpod-8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b.scope: Consumed 1.042s CPU time.
Nov 25 23:50:59 compute-0 podman[252423]: 2025-11-25 23:50:59.534292319 +0000 UTC m=+1.198349981 container died 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 23:51:00 compute-0 ceph-mon[75654]: pgmap v625: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:00 compute-0 podman[252388]: 2025-11-25 23:51:00.524577977 +0000 UTC m=+2.420060410 container cleanup 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:51:00 compute-0 podman[252388]: nova_compute
Nov 25 23:51:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2992ad7e4d325d4135aa23a9e31fd72500878f1a53e2d98b6d6d67c1210d50c-merged.mount: Deactivated successfully.
Nov 25 23:51:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:00 compute-0 podman[252423]: 2025-11-25 23:51:00.585718107 +0000 UTC m=+2.249775729 container remove 8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 25 23:51:00 compute-0 podman[252499]: nova_compute
Nov 25 23:51:00 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 25 23:51:00 compute-0 systemd[1]: Stopped nova_compute container.
Nov 25 23:51:00 compute-0 systemd[1]: Starting nova_compute container...
Nov 25 23:51:00 compute-0 systemd[1]: libpod-conmon-8eb504ad5e1f3b4ba7c27e17e4db52e8a90c90d73f6da29f9b83c890b1c7c75b.scope: Deactivated successfully.
Nov 25 23:51:00 compute-0 sudo[252240]: pam_unix(sudo:session): session closed for user root
Nov 25 23:51:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:51:00 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:51:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:51:00 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:51:00 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev d670e01e-9f3f-4218-a490-20b1a9304fca does not exist
Nov 25 23:51:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:51:00 compute-0 sudo[252518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 23:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 23:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 23:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 23:51:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06730aa2f15c477494b4ba76fd0f586c8dd22dd7e42216f85d7145b07b57045/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 23:51:00 compute-0 sudo[252518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:51:00 compute-0 sudo[252518]: pam_unix(sudo:session): session closed for user root
Nov 25 23:51:00 compute-0 podman[252512]: 2025-11-25 23:51:00.736673167 +0000 UTC m=+0.097799563 container init 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251118)
Nov 25 23:51:00 compute-0 podman[252512]: 2025-11-25 23:51:00.743035465 +0000 UTC m=+0.104161841 container start 16db7246babd293fdae6bf9e6f9c643fcddebf1647d88b7f21c23906a1f96b9a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 23:51:00 compute-0 podman[252512]: nova_compute
Nov 25 23:51:00 compute-0 nova_compute[252550]: + sudo -E kolla_set_configs
Nov 25 23:51:00 compute-0 systemd[1]: Started nova_compute container.
Nov 25 23:51:00 compute-0 sudo[252556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:51:00 compute-0 sudo[252556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:51:00 compute-0 sudo[252316]: pam_unix(sudo:session): session closed for user root
Nov 25 23:51:00 compute-0 sudo[252556]: pam_unix(sudo:session): session closed for user root
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Validating config file
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying service configuration files
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Deleting /etc/ceph
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Creating directory /etc/ceph
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Writing out command to execute
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 23:51:00 compute-0 nova_compute[252550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 23:51:00 compute-0 nova_compute[252550]: ++ cat /run_command
Nov 25 23:51:00 compute-0 nova_compute[252550]: + CMD=nova-compute
Nov 25 23:51:00 compute-0 nova_compute[252550]: + ARGS=
Nov 25 23:51:00 compute-0 nova_compute[252550]: + sudo kolla_copy_cacerts
Nov 25 23:51:00 compute-0 nova_compute[252550]: + [[ ! -n '' ]]
Nov 25 23:51:00 compute-0 nova_compute[252550]: + . kolla_extend_start
Nov 25 23:51:00 compute-0 nova_compute[252550]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 23:51:00 compute-0 nova_compute[252550]: Running command: 'nova-compute'
Nov 25 23:51:00 compute-0 nova_compute[252550]: + umask 0022
Nov 25 23:51:00 compute-0 nova_compute[252550]: + exec nova-compute
Nov 25 23:51:01 compute-0 sudo[252738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sragjlqgtfwvcosophahoxmmxoqqzpyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764114661.0484147-1566-62521150926167/AnsiballZ_podman_container.py'
Nov 25 23:51:01 compute-0 sudo[252738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:51:01 compute-0 ceph-mon[75654]: pgmap v626: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:51:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:51:01 compute-0 python3.9[252740]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:51:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:51:01 compute-0 systemd[1]: Started libpod-conmon-a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07.scope.
Nov 25 23:51:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beb674a9b7ae1a559d086947d6ffd10572bb9cc13e26bf5c502719b6d834c75/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 25 23:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beb674a9b7ae1a559d086947d6ffd10572bb9cc13e26bf5c502719b6d834c75/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 23:51:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beb674a9b7ae1a559d086947d6ffd10572bb9cc13e26bf5c502719b6d834c75/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 25 23:51:01 compute-0 podman[252767]: 2025-11-25 23:51:01.929262553 +0000 UTC m=+0.119501286 container init a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 23:51:01 compute-0 podman[252767]: 2025-11-25 23:51:01.941612101 +0000 UTC m=+0.131850814 container start a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 25 23:51:01 compute-0 python3.9[252740]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Applying nova statedir ownership
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 25 23:51:02 compute-0 nova_compute_init[252789]: INFO:nova_statedir:Nova statedir ownership complete
Nov 25 23:51:02 compute-0 systemd[1]: libpod-a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07.scope: Deactivated successfully.
Nov 25 23:51:02 compute-0 podman[252790]: 2025-11-25 23:51:02.0763294 +0000 UTC m=+0.094907305 container died a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 25 23:51:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07-userdata-shm.mount: Deactivated successfully.
Nov 25 23:51:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8beb674a9b7ae1a559d086947d6ffd10572bb9cc13e26bf5c502719b6d834c75-merged.mount: Deactivated successfully.
Nov 25 23:51:02 compute-0 podman[252801]: 2025-11-25 23:51:02.136833324 +0000 UTC m=+0.066560345 container cleanup a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 23:51:02 compute-0 systemd[1]: libpod-conmon-a8340f2d2830922dc4913321305004a758198acf74b70a177957c7f85ee2db07.scope: Deactivated successfully.
Nov 25 23:51:02 compute-0 sudo[252738]: pam_unix(sudo:session): session closed for user root
Nov 25 23:51:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:02 compute-0 nova_compute[252550]: 2025-11-25 23:51:02.659 252558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 23:51:02 compute-0 nova_compute[252550]: 2025-11-25 23:51:02.659 252558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 23:51:02 compute-0 nova_compute[252550]: 2025-11-25 23:51:02.659 252558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 23:51:02 compute-0 nova_compute[252550]: 2025-11-25 23:51:02.660 252558 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 25 23:51:02 compute-0 sshd-session[221700]: Connection closed by 192.168.122.30 port 52070
Nov 25 23:51:02 compute-0 sshd-session[221697]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:51:02 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 25 23:51:02 compute-0 systemd[1]: session-50.scope: Consumed 2min 35.537s CPU time.
Nov 25 23:51:02 compute-0 systemd-logind[789]: Session 50 logged out. Waiting for processes to exit.
Nov 25 23:51:02 compute-0 systemd-logind[789]: Removed session 50.
Nov 25 23:51:02 compute-0 nova_compute[252550]: 2025-11-25 23:51:02.785 252558 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:51:02 compute-0 nova_compute[252550]: 2025-11-25 23:51:02.812 252558 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:51:02 compute-0 nova_compute[252550]: 2025-11-25 23:51:02.812 252558 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.300 252558 INFO nova.virt.driver [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.391 252558 INFO nova.compute.provider_config [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.564 252558 DEBUG oslo_concurrency.lockutils [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.564 252558 DEBUG oslo_concurrency.lockutils [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.565 252558 DEBUG oslo_concurrency.lockutils [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.565 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.566 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.566 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.566 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.566 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.567 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.567 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.567 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.568 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.568 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.568 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.569 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.569 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.569 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.569 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.570 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.570 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.570 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.571 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.571 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.572 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.572 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.572 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.572 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.573 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.573 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.574 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.574 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.574 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.575 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.575 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.575 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.576 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.576 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.576 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.577 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.577 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.577 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.578 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.578 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.578 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.579 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.579 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.580 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.580 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.581 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.581 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.581 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.582 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.582 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.582 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.583 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.583 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.584 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.584 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.585 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.585 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.585 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.586 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.586 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.586 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.587 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.587 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.587 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.588 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.588 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.588 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.589 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.589 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.589 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.589 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.590 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.590 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.590 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.591 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.591 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.592 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.592 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.592 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.593 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.593 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.593 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.594 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.594 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.594 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.595 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.595 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.595 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.596 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.596 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.597 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.597 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.598 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.598 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.598 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.599 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.599 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.599 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.600 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.600 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.600 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.601 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.601 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.601 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.602 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.602 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.602 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.603 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.603 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.603 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.604 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.604 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.604 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.604 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.604 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.605 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.605 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.605 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.605 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.606 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.606 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.606 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.606 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.606 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.607 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.607 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.607 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.607 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.608 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.608 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.608 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.608 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.608 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.609 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.609 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.609 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.609 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.609 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.610 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.610 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.610 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.610 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.610 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.611 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.611 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.611 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.611 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.612 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.612 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.612 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.612 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.613 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.613 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.613 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.613 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.614 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.614 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.614 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.614 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.614 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.615 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.615 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.615 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.615 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.615 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.616 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.616 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.616 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.616 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.617 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.617 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.617 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.617 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.617 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.618 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.618 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.618 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.618 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.619 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.619 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.619 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.619 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.619 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.620 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.620 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.620 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.620 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.621 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.621 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.621 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.621 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.622 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.622 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.622 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.622 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.622 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.623 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.623 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.623 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.623 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.624 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.624 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.624 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.624 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.625 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.626 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.626 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.626 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.626 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.627 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.627 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.627 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.627 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.627 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.628 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.628 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.628 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.628 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.628 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.629 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.629 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.629 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.629 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.629 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.630 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.630 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.630 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.630 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.630 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.631 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.631 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.631 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.631 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.632 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.632 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.632 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.632 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.632 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.633 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.633 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.633 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.633 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.633 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.634 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.634 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.634 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.634 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.634 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.635 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.635 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.635 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.635 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.636 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.636 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.636 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.637 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.637 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.637 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.637 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.637 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.638 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.638 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.638 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.638 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.638 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.639 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.639 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.639 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.639 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.640 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.640 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.640 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.640 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.640 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.641 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.641 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.641 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.641 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.641 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.642 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.642 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.642 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.642 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.643 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.643 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.643 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.643 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.644 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.644 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.644 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.644 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.644 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.645 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 ceph-mon[75654]: pgmap v627: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.645 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.645 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.646 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.646 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.646 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.646 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.647 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.647 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.647 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.648 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.648 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.648 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.648 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.648 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.649 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.650 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.650 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.650 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.650 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.650 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.651 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.651 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.651 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.651 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.651 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.652 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.653 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.653 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.653 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.653 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.653 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.654 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.655 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.655 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.655 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.655 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.655 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.656 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.656 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.656 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.656 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.656 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.657 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.657 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.657 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.657 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.657 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.658 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.659 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.659 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.659 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.659 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.659 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.660 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.661 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.662 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.662 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.662 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.662 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.662 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.663 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.663 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.663 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.663 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.663 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.664 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.665 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.665 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.665 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.665 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.665 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.666 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.667 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.667 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.667 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.667 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.667 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.668 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.669 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.669 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.669 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.669 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.669 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.670 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.671 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.671 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.671 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.671 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.671 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.672 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.673 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.673 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.673 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.673 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.673 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.674 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.674 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.674 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.674 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.674 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.675 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.675 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.675 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.675 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.675 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.676 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.676 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.676 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.676 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.676 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.677 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.678 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.678 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.678 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.678 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.678 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.679 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.680 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.680 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.680 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.680 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.680 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.681 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.681 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.681 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.681 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.681 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.682 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.682 252558 WARNING oslo_config.cfg [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 23:51:03 compute-0 nova_compute[252550]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 23:51:03 compute-0 nova_compute[252550]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 23:51:03 compute-0 nova_compute[252550]: and ``live_migration_inbound_addr`` respectively.
Nov 25 23:51:03 compute-0 nova_compute[252550]: ).  Its value may be silently ignored in the future.
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.682 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.682 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.682 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.683 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.683 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.683 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.683 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.683 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.684 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.685 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.685 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.685 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.685 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.685 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rbd_secret_uuid        = 101922db-575f-58e2-980f-928050464f69 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.686 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.687 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.687 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.687 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.687 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.687 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.688 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.688 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.688 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.688 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.688 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.689 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.690 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.690 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.690 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.690 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.690 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.691 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.691 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.691 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.691 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.692 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.693 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.694 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.694 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.694 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.694 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.694 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.695 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.696 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.696 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.696 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.696 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.696 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.697 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.697 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.697 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.697 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.697 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.698 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.699 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.699 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.699 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.699 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.699 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.700 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.701 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.701 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.701 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.701 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.701 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.702 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.703 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.703 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.703 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.703 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.703 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.704 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.705 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.705 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.705 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.705 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.705 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.706 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.707 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.707 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.707 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.707 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.707 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.708 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.708 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.708 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.708 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.708 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.709 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.710 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.710 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.710 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.710 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.710 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.711 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.711 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.711 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.711 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.711 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.712 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.713 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.713 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.713 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.713 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.713 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.714 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.715 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.715 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.715 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.715 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.715 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.716 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.716 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.716 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.716 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.716 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.717 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.718 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.718 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.718 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.718 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.718 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.719 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.720 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.720 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.720 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.720 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.720 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.721 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.722 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.723 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.723 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.723 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.723 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.723 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.724 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.724 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.724 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.724 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.724 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.725 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.726 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.726 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.726 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.726 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.726 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.727 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.727 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.727 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.727 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.727 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.728 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.728 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.729 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.729 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.729 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.729 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.729 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.730 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.730 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.730 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.730 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.730 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.731 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.732 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.732 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.732 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.732 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.732 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.733 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.734 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.734 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.734 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.734 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.734 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.735 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.735 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.735 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.735 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.735 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.736 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.737 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.737 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.737 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.737 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.737 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.738 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.739 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.739 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.739 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.739 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.739 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.740 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.740 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.740 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.740 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.740 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.741 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.742 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.742 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.742 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.742 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.742 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.743 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.744 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.744 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.744 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.744 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.744 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.745 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.746 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.746 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.746 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.746 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.746 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.747 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.747 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.747 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.747 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.747 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.748 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.749 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.750 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.751 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.751 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.751 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.751 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.751 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.752 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.753 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.753 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.753 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.753 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.753 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.754 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.755 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.755 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.755 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.755 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.755 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.756 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.757 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.757 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.757 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.757 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.757 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.758 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.759 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.760 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.760 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.760 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.760 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.760 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.761 252558 DEBUG oslo_service.service [None req-0a0b7481-8a4f-4c01-a1a7-894569b37936 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.762 252558 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.889 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.890 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.890 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.891 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.907 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0df9cee100> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.910 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0df9cee100> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.911 252558 INFO nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Connection event '1' reason 'None'
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.924 252558 INFO nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 23:51:03 compute-0 nova_compute[252550]: 
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <host>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <uuid>99edd01f-cb88-4b88-a56d-15f374f9d1d0</uuid>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <cpu>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <arch>x86_64</arch>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model>EPYC-Rome-v4</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <vendor>AMD</vendor>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <microcode version='16777317'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <signature family='23' model='49' stepping='0'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='x2apic'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='tsc-deadline'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='osxsave'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='hypervisor'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='tsc_adjust'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='spec-ctrl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='stibp'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='arch-capabilities'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='ssbd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='cmp_legacy'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='topoext'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='virt-ssbd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='lbrv'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='tsc-scale'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='vmcb-clean'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='pause-filter'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='pfthreshold'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='svme-addr-chk'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='rdctl-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='skip-l1dfl-vmentry'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='mds-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature name='pschange-mc-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <pages unit='KiB' size='4'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <pages unit='KiB' size='2048'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <pages unit='KiB' size='1048576'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </cpu>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <power_management>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <suspend_mem/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </power_management>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <iommu support='no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <migration_features>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <live/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <uri_transports>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <uri_transport>tcp</uri_transport>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <uri_transport>rdma</uri_transport>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </uri_transports>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </migration_features>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <topology>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <cells num='1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <cell id='0'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:           <memory unit='KiB'>7864320</memory>
Nov 25 23:51:03 compute-0 nova_compute[252550]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 25 23:51:03 compute-0 nova_compute[252550]:           <pages unit='KiB' size='2048'>0</pages>
Nov 25 23:51:03 compute-0 nova_compute[252550]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 25 23:51:03 compute-0 nova_compute[252550]:           <distances>
Nov 25 23:51:03 compute-0 nova_compute[252550]:             <sibling id='0' value='10'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:           </distances>
Nov 25 23:51:03 compute-0 nova_compute[252550]:           <cpus num='8'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:           </cpus>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         </cell>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </cells>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </topology>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <cache>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </cache>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <secmodel>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model>selinux</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <doi>0</doi>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </secmodel>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <secmodel>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model>dac</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <doi>0</doi>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </secmodel>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   </host>
Nov 25 23:51:03 compute-0 nova_compute[252550]: 
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <guest>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <os_type>hvm</os_type>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <arch name='i686'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <wordsize>32</wordsize>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <domain type='qemu'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <domain type='kvm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </arch>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <features>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <pae/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <nonpae/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <acpi default='on' toggle='yes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <apic default='on' toggle='no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <cpuselection/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <deviceboot/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <disksnapshot default='on' toggle='no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <externalSnapshot/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </features>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   </guest>
Nov 25 23:51:03 compute-0 nova_compute[252550]: 
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <guest>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <os_type>hvm</os_type>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <arch name='x86_64'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <wordsize>64</wordsize>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <domain type='qemu'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <domain type='kvm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </arch>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <features>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <acpi default='on' toggle='yes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <apic default='on' toggle='no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <cpuselection/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <deviceboot/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <disksnapshot default='on' toggle='no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <externalSnapshot/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </features>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   </guest>
Nov 25 23:51:03 compute-0 nova_compute[252550]: 
Nov 25 23:51:03 compute-0 nova_compute[252550]: </capabilities>
Nov 25 23:51:03 compute-0 nova_compute[252550]: 
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.937 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.973 252558 WARNING nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.974 252558 DEBUG nova.virt.libvirt.volume.mount [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 25 23:51:03 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.977 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 23:51:03 compute-0 nova_compute[252550]: <domainCapabilities>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <domain>kvm</domain>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <arch>i686</arch>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <vcpu max='240'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <iothreads supported='yes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <os supported='yes'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <enum name='firmware'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <loader supported='yes'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <value>rom</value>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <value>pflash</value>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <enum name='readonly'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <value>yes</value>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <value>no</value>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <enum name='secure'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <value>no</value>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </loader>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   </os>
Nov 25 23:51:03 compute-0 nova_compute[252550]:   <cpu>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <mode name='host-passthrough' supported='yes'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <enum name='hostPassthroughMigratable'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <value>on</value>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <value>off</value>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <mode name='maximum' supported='yes'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <enum name='maximumMigratable'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <value>on</value>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <value>off</value>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <mode name='host-model' supported='yes'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <vendor>AMD</vendor>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='x2apic'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='hypervisor'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='stibp'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='ssbd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='overflow-recov'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='succor'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='ibrs'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='lbrv'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc-scale'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='flushbyasid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='pause-filter'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='pfthreshold'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <feature policy='disable' name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:03 compute-0 nova_compute[252550]:     <mode name='custom' supported='yes'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Broadwell'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Broadwell-IBRS'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Broadwell-noTSX'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v2'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v3'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v4'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Cooperlake'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Cooperlake-v1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Cooperlake-v2'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Denverton'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Denverton-v1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Denverton-v2'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Denverton-v3'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Dhyana-v2'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-Genoa'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='auto-ibrs'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='auto-ibrs'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan-v1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan-v2'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v2'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v3'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-v3'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='EPYC-v4'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids-v1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids-v2'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx10'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx10-128'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx10-256'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx10-512'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Haswell'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Haswell-IBRS'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Haswell-noTSX'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Haswell-v1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Haswell-v2'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Haswell-v3'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Haswell-v4'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v1'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v2'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v3'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 23:51:03 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v4'>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:03 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v6'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v7'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='KnightsMill'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4fmaps'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4vnniw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512er'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512pf'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='KnightsMill-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4fmaps'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4vnniw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512er'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512pf'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G4-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tbm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G5-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tbm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SierraForest'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ne-convert'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cmpccxadd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SierraForest-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ne-convert'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cmpccxadd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='athlon'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='athlon-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='core2duo'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='core2duo-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='coreduo'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='coreduo-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='n270'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='n270-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='phenom'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='phenom-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </cpu>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <memoryBacking supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <enum name='sourceType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>file</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>anonymous</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>memfd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </memoryBacking>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <devices>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <disk supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='diskDevice'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>disk</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>cdrom</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>floppy</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>lun</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='bus'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>ide</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>fdc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>scsi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>sata</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-non-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </disk>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <graphics supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vnc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>egl-headless</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dbus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </graphics>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <video supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='modelType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vga</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>cirrus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>none</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>bochs</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>ramfb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </video>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <hostdev supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='mode'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>subsystem</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='startupPolicy'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>default</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>mandatory</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>requisite</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>optional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='subsysType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pci</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>scsi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='capsType'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='pciBackend'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </hostdev>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <rng supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-non-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>random</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>egd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>builtin</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </rng>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <filesystem supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='driverType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>path</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>handle</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtiofs</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </filesystem>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <tpm supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tpm-tis</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tpm-crb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>emulator</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>external</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendVersion'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>2.0</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </tpm>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <redirdev supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='bus'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </redirdev>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <channel supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pty</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>unix</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </channel>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <crypto supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>qemu</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>builtin</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </crypto>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <interface supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>default</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>passt</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </interface>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <panic supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>isa</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>hyperv</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </panic>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <console supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>null</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pty</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dev</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>file</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pipe</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>stdio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>udp</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tcp</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>unix</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>qemu-vdagent</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dbus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </console>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </devices>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <features>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <gic supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <vmcoreinfo supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <genid supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <backingStoreInput supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <backup supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <async-teardown supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <ps2 supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <sev supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <sgx supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <hyperv supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='features'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>relaxed</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vapic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>spinlocks</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vpindex</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>runtime</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>synic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>stimer</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>reset</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vendor_id</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>frequencies</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>reenlightenment</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tlbflush</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>ipi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>avic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>emsr_bitmap</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>xmm_input</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <defaults>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <spinlocks>4095</spinlocks>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <stimer_direct>on</stimer_direct>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </defaults>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </hyperv>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <launchSecurity supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='sectype'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tdx</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </launchSecurity>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </features>
Nov 25 23:51:04 compute-0 nova_compute[252550]: </domainCapabilities>
Nov 25 23:51:04 compute-0 nova_compute[252550]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:03.986 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 23:51:04 compute-0 nova_compute[252550]: <domainCapabilities>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <domain>kvm</domain>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <arch>i686</arch>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <vcpu max='4096'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <iothreads supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <os supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <enum name='firmware'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <loader supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>rom</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pflash</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='readonly'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>yes</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>no</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='secure'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>no</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </loader>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </os>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <cpu>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='host-passthrough' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='hostPassthroughMigratable'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>on</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>off</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='maximum' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='maximumMigratable'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>on</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>off</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='host-model' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <vendor>AMD</vendor>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='x2apic'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='hypervisor'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='stibp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='ssbd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='overflow-recov'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='succor'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='ibrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='lbrv'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc-scale'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='flushbyasid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='pause-filter'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='pfthreshold'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='disable' name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='custom' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cooperlake'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cooperlake-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cooperlake-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Dhyana-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Genoa'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='auto-ibrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='auto-ibrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10-128'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10-256'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10-512'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v6'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v7'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='KnightsMill'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4fmaps'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4vnniw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512er'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512pf'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='KnightsMill-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4fmaps'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4vnniw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512er'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512pf'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G4-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tbm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G5-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tbm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SierraForest'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ne-convert'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cmpccxadd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SierraForest-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ne-convert'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cmpccxadd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='athlon'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='athlon-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='core2duo'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='core2duo-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='coreduo'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='coreduo-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='n270'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='n270-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='phenom'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='phenom-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </cpu>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <memoryBacking supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <enum name='sourceType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>file</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>anonymous</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>memfd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </memoryBacking>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <devices>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <disk supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='diskDevice'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>disk</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>cdrom</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>floppy</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>lun</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='bus'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>fdc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>scsi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>sata</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-non-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </disk>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <graphics supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vnc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>egl-headless</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dbus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </graphics>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <video supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='modelType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vga</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>cirrus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>none</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>bochs</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>ramfb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </video>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <hostdev supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='mode'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>subsystem</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='startupPolicy'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>default</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>mandatory</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>requisite</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>optional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='subsysType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pci</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>scsi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='capsType'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='pciBackend'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </hostdev>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <rng supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-non-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>random</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>egd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>builtin</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </rng>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <filesystem supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='driverType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>path</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>handle</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtiofs</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </filesystem>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <tpm supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tpm-tis</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tpm-crb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>emulator</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>external</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendVersion'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>2.0</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </tpm>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <redirdev supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='bus'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </redirdev>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <channel supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pty</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>unix</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </channel>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <crypto supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>qemu</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>builtin</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </crypto>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <interface supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>default</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>passt</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </interface>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <panic supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>isa</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>hyperv</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </panic>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <console supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>null</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pty</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dev</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>file</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pipe</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>stdio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>udp</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tcp</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>unix</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>qemu-vdagent</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dbus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </console>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </devices>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <features>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <gic supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <vmcoreinfo supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <genid supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <backingStoreInput supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <backup supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <async-teardown supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <ps2 supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <sev supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <sgx supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <hyperv supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='features'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>relaxed</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vapic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>spinlocks</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vpindex</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>runtime</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>synic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>stimer</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>reset</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vendor_id</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>frequencies</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>reenlightenment</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tlbflush</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>ipi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>avic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>emsr_bitmap</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>xmm_input</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <defaults>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <spinlocks>4095</spinlocks>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <stimer_direct>on</stimer_direct>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </defaults>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </hyperv>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <launchSecurity supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='sectype'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tdx</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </launchSecurity>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </features>
Nov 25 23:51:04 compute-0 nova_compute[252550]: </domainCapabilities>
Nov 25 23:51:04 compute-0 nova_compute[252550]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:04.042 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:04.046 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 23:51:04 compute-0 nova_compute[252550]: <domainCapabilities>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <domain>kvm</domain>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <arch>x86_64</arch>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <vcpu max='240'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <iothreads supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <os supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <enum name='firmware'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <loader supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>rom</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pflash</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='readonly'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>yes</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>no</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='secure'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>no</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </loader>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </os>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <cpu>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='host-passthrough' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='hostPassthroughMigratable'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>on</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>off</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='maximum' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='maximumMigratable'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>on</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>off</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='host-model' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <vendor>AMD</vendor>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='x2apic'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='hypervisor'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='stibp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='ssbd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='overflow-recov'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='succor'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='ibrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='lbrv'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc-scale'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='flushbyasid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='pause-filter'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='pfthreshold'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='disable' name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='custom' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cooperlake'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cooperlake-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cooperlake-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Dhyana-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Genoa'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='auto-ibrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='auto-ibrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10-128'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10-256'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10-512'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v6'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v7'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='KnightsMill'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4fmaps'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4vnniw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512er'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512pf'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='KnightsMill-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4fmaps'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4vnniw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512er'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512pf'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G4-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tbm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G5-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tbm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SierraForest'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ne-convert'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cmpccxadd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SierraForest-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ne-convert'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cmpccxadd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='athlon'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='athlon-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='core2duo'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='core2duo-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='coreduo'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='coreduo-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='n270'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='n270-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='phenom'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='phenom-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </cpu>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <memoryBacking supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <enum name='sourceType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>file</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>anonymous</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>memfd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </memoryBacking>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <devices>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <disk supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='diskDevice'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>disk</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>cdrom</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>floppy</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>lun</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='bus'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>ide</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>fdc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>scsi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>sata</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-non-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </disk>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <graphics supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vnc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>egl-headless</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dbus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </graphics>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <video supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='modelType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vga</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>cirrus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>none</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>bochs</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>ramfb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </video>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <hostdev supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='mode'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>subsystem</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='startupPolicy'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>default</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>mandatory</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>requisite</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>optional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='subsysType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pci</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>scsi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='capsType'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='pciBackend'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </hostdev>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <rng supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-non-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>random</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>egd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>builtin</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </rng>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <filesystem supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='driverType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>path</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>handle</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtiofs</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </filesystem>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <tpm supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tpm-tis</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tpm-crb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>emulator</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>external</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendVersion'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>2.0</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </tpm>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <redirdev supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='bus'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </redirdev>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <channel supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pty</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>unix</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </channel>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <crypto supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>qemu</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>builtin</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </crypto>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <interface supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>default</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>passt</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </interface>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <panic supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>isa</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>hyperv</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </panic>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <console supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>null</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pty</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dev</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>file</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pipe</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>stdio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>udp</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tcp</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>unix</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>qemu-vdagent</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dbus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </console>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </devices>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <features>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <gic supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <vmcoreinfo supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <genid supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <backingStoreInput supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <backup supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <async-teardown supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <ps2 supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <sev supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <sgx supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <hyperv supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='features'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>relaxed</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vapic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>spinlocks</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vpindex</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>runtime</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>synic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>stimer</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>reset</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vendor_id</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>frequencies</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>reenlightenment</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tlbflush</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>ipi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>avic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>emsr_bitmap</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>xmm_input</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <defaults>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <spinlocks>4095</spinlocks>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <stimer_direct>on</stimer_direct>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </defaults>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </hyperv>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <launchSecurity supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='sectype'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tdx</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </launchSecurity>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </features>
Nov 25 23:51:04 compute-0 nova_compute[252550]: </domainCapabilities>
Nov 25 23:51:04 compute-0 nova_compute[252550]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:04.113 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 23:51:04 compute-0 nova_compute[252550]: <domainCapabilities>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <domain>kvm</domain>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <arch>x86_64</arch>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <vcpu max='4096'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <iothreads supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <os supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <enum name='firmware'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>efi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <loader supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>rom</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pflash</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='readonly'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>yes</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>no</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='secure'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>yes</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>no</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </loader>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </os>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <cpu>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='host-passthrough' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='hostPassthroughMigratable'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>on</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>off</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='maximum' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='maximumMigratable'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>on</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>off</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='host-model' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <vendor>AMD</vendor>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='x2apic'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='hypervisor'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='stibp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='ssbd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='overflow-recov'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='succor'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='ibrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='lbrv'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='tsc-scale'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='flushbyasid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='pause-filter'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='pfthreshold'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <feature policy='disable' name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <mode name='custom' supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Broadwell-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cooperlake'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cooperlake-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Cooperlake-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Denverton-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Dhyana-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Genoa'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='auto-ibrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='auto-ibrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Milan-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amd-psfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='no-nested-data-bp'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='null-sel-clr-base'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='stibp-always-on'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-Rome-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='EPYC-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='GraniteRapids-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10-128'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10-256'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx10-512'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='prefetchiti'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Haswell-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v6'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Icelake-Server-v7'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='IvyBridge-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='KnightsMill'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4fmaps'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4vnniw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512er'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512pf'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='KnightsMill-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4fmaps'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-4vnniw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512er'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512pf'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G4-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tbm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Opteron_G5-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fma4'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tbm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xop'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SapphireRapids-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='amx-tile'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-bf16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-fp16'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512-vpopcntdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bitalg'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vbmi2'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrc'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fzrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='la57'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='taa-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='tsx-ldtrk'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xfd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SierraForest'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ne-convert'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cmpccxadd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='SierraForest-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ifma'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-ne-convert'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx-vnni-int8'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='bus-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cmpccxadd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fbsdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='fsrs'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ibrs-all'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mcdt-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pbrsb-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='psdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='sbdr-ssdp-no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='serialize'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vaes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='vpclmulqdq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Client-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='hle'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='rtm'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Skylake-Server-v5'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512bw'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512cd'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512dq'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512f'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='avx512vl'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='invpcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pcid'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='pku'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='mpx'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v2'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v3'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='core-capability'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='split-lock-detect'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='Snowridge-v4'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='cldemote'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='erms'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='gfni'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdir64b'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='movdiri'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='xsaves'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='athlon'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='athlon-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='core2duo'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='core2duo-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='coreduo'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='coreduo-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='n270'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='n270-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='ss'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='phenom'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <blockers model='phenom-v1'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnow'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <feature name='3dnowext'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </blockers>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </mode>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </cpu>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <memoryBacking supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <enum name='sourceType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>file</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>anonymous</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <value>memfd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </memoryBacking>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <devices>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <disk supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='diskDevice'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>disk</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>cdrom</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>floppy</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>lun</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='bus'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>fdc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>scsi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>sata</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-non-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </disk>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <graphics supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vnc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>egl-headless</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dbus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </graphics>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <video supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='modelType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vga</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>cirrus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>none</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>bochs</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>ramfb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </video>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <hostdev supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='mode'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>subsystem</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='startupPolicy'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>default</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>mandatory</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>requisite</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>optional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='subsysType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pci</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>scsi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='capsType'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='pciBackend'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </hostdev>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <rng supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtio-non-transitional</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>random</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>egd</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>builtin</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </rng>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <filesystem supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='driverType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>path</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>handle</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>virtiofs</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </filesystem>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <tpm supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tpm-tis</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tpm-crb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>emulator</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>external</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendVersion'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>2.0</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </tpm>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <redirdev supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='bus'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>usb</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </redirdev>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <channel supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pty</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>unix</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </channel>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <crypto supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>qemu</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendModel'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>builtin</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </crypto>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <interface supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='backendType'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>default</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>passt</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </interface>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <panic supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='model'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>isa</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>hyperv</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </panic>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <console supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='type'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>null</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vc</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pty</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dev</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>file</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>pipe</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>stdio</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>udp</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tcp</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>unix</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>qemu-vdagent</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>dbus</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </console>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </devices>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   <features>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <gic supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <vmcoreinfo supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <genid supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <backingStoreInput supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <backup supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <async-teardown supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <ps2 supported='yes'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <sev supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <sgx supported='no'/>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <hyperv supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='features'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>relaxed</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vapic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>spinlocks</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vpindex</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>runtime</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>synic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>stimer</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>reset</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>vendor_id</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>frequencies</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>reenlightenment</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tlbflush</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>ipi</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>avic</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>emsr_bitmap</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>xmm_input</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <defaults>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <spinlocks>4095</spinlocks>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <stimer_direct>on</stimer_direct>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </defaults>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </hyperv>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     <launchSecurity supported='yes'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       <enum name='sectype'>
Nov 25 23:51:04 compute-0 nova_compute[252550]:         <value>tdx</value>
Nov 25 23:51:04 compute-0 nova_compute[252550]:       </enum>
Nov 25 23:51:04 compute-0 nova_compute[252550]:     </launchSecurity>
Nov 25 23:51:04 compute-0 nova_compute[252550]:   </features>
Nov 25 23:51:04 compute-0 nova_compute[252550]: </domainCapabilities>
Nov 25 23:51:04 compute-0 nova_compute[252550]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:04.175 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:04.176 252558 INFO nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Secure Boot support detected
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:04.177 252558 INFO nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:04.178 252558 INFO nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:04.185 252558 DEBUG nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:04.364 252558 INFO nova.virt.node [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Determined node identity 08547965-b35f-4b7b-95d8-902f06aa011c from /var/lib/nova/compute_id
Nov 25 23:51:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:04 compute-0 nova_compute[252550]: 2025-11-25 23:51:04.809 252558 WARNING nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Compute nodes ['08547965-b35f-4b7b-95d8-902f06aa011c'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.206 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.267 252558 WARNING nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.267 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.268 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.268 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.268 252558 DEBUG nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.269 252558 DEBUG oslo_concurrency.processutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:51:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:51:05 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4294696395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:51:05 compute-0 ceph-mon[75654]: pgmap v628: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:05 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4294696395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.662 252558 DEBUG oslo_concurrency.processutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:51:05 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 23:51:05 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.953 252558 WARNING nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.955 252558 DEBUG nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5304MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.955 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.955 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:51:05 compute-0 nova_compute[252550]: 2025-11-25 23:51:05.976 252558 WARNING nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] No compute node record for compute-0.ctlplane.example.com:08547965-b35f-4b7b-95d8-902f06aa011c: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 08547965-b35f-4b7b-95d8-902f06aa011c could not be found.
Nov 25 23:51:06 compute-0 nova_compute[252550]: 2025-11-25 23:51:06.016 252558 INFO nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 08547965-b35f-4b7b-95d8-902f06aa011c
Nov 25 23:51:06 compute-0 nova_compute[252550]: 2025-11-25 23:51:06.074 252558 DEBUG nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 23:51:06 compute-0 nova_compute[252550]: 2025-11-25 23:51:06.074 252558 DEBUG nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 23:51:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:07 compute-0 nova_compute[252550]: 2025-11-25 23:51:07.029 252558 INFO nova.scheduler.client.report [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [req-36d7e41c-c7ff-4335-87d4-e2ccdcdfbbe6] Created resource provider record via placement API for resource provider with UUID 08547965-b35f-4b7b-95d8-902f06aa011c and name compute-0.ctlplane.example.com.
Nov 25 23:51:07 compute-0 nova_compute[252550]: 2025-11-25 23:51:07.404 252558 DEBUG oslo_concurrency.processutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:51:07 compute-0 ceph-mon[75654]: pgmap v629: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:07 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:51:07 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322178470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:51:07 compute-0 nova_compute[252550]: 2025-11-25 23:51:07.809 252558 DEBUG oslo_concurrency.processutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:51:07 compute-0 nova_compute[252550]: 2025-11-25 23:51:07.815 252558 DEBUG nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 25 23:51:07 compute-0 nova_compute[252550]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 25 23:51:07 compute-0 nova_compute[252550]: 2025-11-25 23:51:07.815 252558 INFO nova.virt.libvirt.host [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] kernel doesn't support AMD SEV
Nov 25 23:51:07 compute-0 nova_compute[252550]: 2025-11-25 23:51:07.816 252558 DEBUG nova.compute.provider_tree [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Updating inventory in ProviderTree for provider 08547965-b35f-4b7b-95d8-902f06aa011c with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 23:51:07 compute-0 nova_compute[252550]: 2025-11-25 23:51:07.817 252558 DEBUG nova.virt.libvirt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 23:51:07 compute-0 nova_compute[252550]: 2025-11-25 23:51:07.895 252558 DEBUG nova.scheduler.client.report [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Updated inventory for provider 08547965-b35f-4b7b-95d8-902f06aa011c with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 23:51:07 compute-0 nova_compute[252550]: 2025-11-25 23:51:07.895 252558 DEBUG nova.compute.provider_tree [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Updating resource provider 08547965-b35f-4b7b-95d8-902f06aa011c generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 23:51:07 compute-0 nova_compute[252550]: 2025-11-25 23:51:07.896 252558 DEBUG nova.compute.provider_tree [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Updating inventory in ProviderTree for provider 08547965-b35f-4b7b-95d8-902f06aa011c with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 23:51:08 compute-0 nova_compute[252550]: 2025-11-25 23:51:08.026 252558 DEBUG nova.compute.provider_tree [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Updating resource provider 08547965-b35f-4b7b-95d8-902f06aa011c generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 23:51:08 compute-0 nova_compute[252550]: 2025-11-25 23:51:08.069 252558 DEBUG nova.compute.resource_tracker [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 23:51:08 compute-0 nova_compute[252550]: 2025-11-25 23:51:08.070 252558 DEBUG oslo_concurrency.lockutils [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:51:08 compute-0 nova_compute[252550]: 2025-11-25 23:51:08.070 252558 DEBUG nova.service [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 25 23:51:08 compute-0 nova_compute[252550]: 2025-11-25 23:51:08.148 252558 DEBUG nova.service [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 25 23:51:08 compute-0 nova_compute[252550]: 2025-11-25 23:51:08.148 252558 DEBUG nova.servicegroup.drivers.db [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 25 23:51:08 compute-0 podman[252949]: 2025-11-25 23:51:08.240929911 +0000 UTC m=+0.058961483 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 23:51:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:08 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1322178470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:51:09 compute-0 ceph-mon[75654]: pgmap v630: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:51:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3046 writes, 12K keys, 3046 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.01 MB/s
                                           Cumulative WAL: 3046 writes, 3046 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1278 writes, 5305 keys, 1278 commit groups, 1.0 writes per commit group, ingest: 5.67 MB, 0.01 MB/s
                                           Interval WAL: 1278 writes, 1278 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    113.2      0.09              0.05         6    0.015       0      0       0.0       0.0
                                             L6      1/0    4.62 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    172.2    140.5      0.17              0.10         5    0.035     16K   2263       0.0       0.0
                                            Sum      1/0    4.62 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4    113.6    131.2      0.26              0.15        11    0.024     16K   2263       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5    121.1    123.5      0.15              0.09         6    0.026     10K   1494       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    172.2    140.5      0.17              0.10         5    0.035     16K   2263       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    116.2      0.09              0.05         5    0.017       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.010, interval 0.004
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.03 MB/s write, 0.03 GB read, 0.02 MB/s read, 0.3 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f0edcc31f0#2 capacity: 308.00 MB usage: 1.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(83,1.07 MB,0.345929%) FilterBlock(12,54.36 KB,0.0172355%) IndexBlock(12,107.14 KB,0.0339706%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 25 23:51:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:11 compute-0 ceph-mon[75654]: pgmap v631: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:13 compute-0 ceph-mon[75654]: pgmap v632: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:15 compute-0 podman[252969]: 2025-11-25 23:51:15.277158458 +0000 UTC m=+0.107241022 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 23:51:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:15 compute-0 ceph-mon[75654]: pgmap v633: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:17 compute-0 podman[252996]: 2025-11-25 23:51:17.258233886 +0000 UTC m=+0.078678726 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 23:51:17 compute-0 ceph-mon[75654]: pgmap v634: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:19 compute-0 ceph-mon[75654]: pgmap v635: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:24 compute-0 ceph-mon[75654]: pgmap v636: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:25 compute-0 ceph-mon[75654]: pgmap v637: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:51:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:51:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:51:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:51:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:51:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:51:26 compute-0 ceph-mon[75654]: pgmap v638: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:26 compute-0 nova_compute[252550]: 2025-11-25 23:51:26.150 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:51:26 compute-0 nova_compute[252550]: 2025-11-25 23:51:26.351 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:51:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:51:26 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3410503557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:51:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:51:26 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3410503557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:51:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:51:27 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3915266591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:51:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:51:27 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3915266591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:51:27 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3410503557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:51:27 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3410503557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:51:27 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3915266591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:51:27 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3915266591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:51:28 compute-0 ceph-mon[75654]: pgmap v639: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:30 compute-0 ceph-mon[75654]: pgmap v640: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:32 compute-0 ceph-mon[75654]: pgmap v641: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:34 compute-0 ceph-mon[75654]: pgmap v642: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:36 compute-0 ceph-mon[75654]: pgmap v643: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:37 compute-0 ceph-mon[75654]: pgmap v644: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:39 compute-0 podman[253016]: 2025-11-25 23:51:39.245436986 +0000 UTC m=+0.067882149 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 23:51:39 compute-0 ceph-mon[75654]: pgmap v645: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:51:40.759 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:51:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:51:40.760 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:51:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:51:40.760 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:51:41 compute-0 ceph-mon[75654]: pgmap v646: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:51:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570208736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:51:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:51:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3570208736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:51:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:42 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3570208736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:51:42 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3570208736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:51:43 compute-0 ceph-mon[75654]: pgmap v647: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:45 compute-0 ceph-mon[75654]: pgmap v648: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:46 compute-0 podman[253036]: 2025-11-25 23:51:46.336503135 +0000 UTC m=+0.158819439 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 23:51:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:47 compute-0 ceph-mon[75654]: pgmap v649: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:48 compute-0 podman[253064]: 2025-11-25 23:51:48.287621939 +0000 UTC m=+0.108536426 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 23:51:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:49 compute-0 ceph-mon[75654]: pgmap v650: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:51 compute-0 sshd-session[253083]: Received disconnect from 171.244.140.22 port 33454:11: Bye Bye [preauth]
Nov 25 23:51:51 compute-0 sshd-session[253083]: Disconnected from authenticating user root 171.244.140.22 port 33454 [preauth]
Nov 25 23:51:51 compute-0 ceph-mon[75654]: pgmap v651: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:53 compute-0 ceph-mon[75654]: pgmap v652: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:51:55 compute-0 ceph-mon[75654]: pgmap v653: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:51:56
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['images', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:51:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:57 compute-0 ceph-mon[75654]: pgmap v654: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:51:59 compute-0 ceph-mon[75654]: pgmap v655: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:00 compute-0 sudo[253085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:52:00 compute-0 sudo[253085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:00 compute-0 sudo[253085]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:00 compute-0 sudo[253110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:52:00 compute-0 sudo[253110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:00 compute-0 sudo[253110]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:01 compute-0 sudo[253135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:52:01 compute-0 sudo[253135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:01 compute-0 sudo[253135]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:01 compute-0 sudo[253160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:52:01 compute-0 sudo[253160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:01 compute-0 sudo[253160]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:01 compute-0 ceph-mon[75654]: pgmap v656: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:52:01 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:52:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:52:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:52:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:52:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev ec3420d9-9566-43cd-8118-96d9e585ff52 does not exist
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 750da9d5-d847-4d04-9af5-a9ad762e81b0 does not exist
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev df1494c2-f84f-4f77-b4e8-a529ef8ad0e3 does not exist
Nov 25 23:52:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:52:01 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:52:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:52:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:52:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:52:01 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:52:01 compute-0 sudo[253216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:52:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:52:01 compute-0 sudo[253216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:01 compute-0 sudo[253216]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:01 compute-0 sudo[253241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:52:01 compute-0 sudo[253241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:01 compute-0 sudo[253241]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:01 compute-0 anacron[7485]: Job `cron.weekly' started
Nov 25 23:52:01 compute-0 anacron[7485]: Job `cron.weekly' terminated
Nov 25 23:52:01 compute-0 sudo[253266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:52:01 compute-0 sudo[253266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:01 compute-0 sudo[253266]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:01 compute-0 sudo[253293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:52:01 compute-0 sudo[253293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:02 compute-0 podman[253358]: 2025-11-25 23:52:02.381352279 +0000 UTC m=+0.039035455 container create 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:52:02 compute-0 systemd[1]: Started libpod-conmon-5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79.scope.
Nov 25 23:52:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:52:02 compute-0 podman[253358]: 2025-11-25 23:52:02.362460079 +0000 UTC m=+0.020143245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:52:02 compute-0 podman[253358]: 2025-11-25 23:52:02.47387387 +0000 UTC m=+0.131557046 container init 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:52:02 compute-0 podman[253358]: 2025-11-25 23:52:02.481408539 +0000 UTC m=+0.139091715 container start 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:52:02 compute-0 podman[253358]: 2025-11-25 23:52:02.484756968 +0000 UTC m=+0.142440154 container attach 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:52:02 compute-0 fervent_khayyam[253374]: 167 167
Nov 25 23:52:02 compute-0 systemd[1]: libpod-5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79.scope: Deactivated successfully.
Nov 25 23:52:02 compute-0 conmon[253374]: conmon 5a6e355ddf9adb78f3a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79.scope/container/memory.events
Nov 25 23:52:02 compute-0 podman[253358]: 2025-11-25 23:52:02.488776665 +0000 UTC m=+0.146459801 container died 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a1758dbe8b1e3bcbe70968a02a8dee654ecb8882eab207a28ab3d022afbecb4-merged.mount: Deactivated successfully.
Nov 25 23:52:02 compute-0 podman[253358]: 2025-11-25 23:52:02.533340445 +0000 UTC m=+0.191023611 container remove 5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 25 23:52:02 compute-0 systemd[1]: libpod-conmon-5a6e355ddf9adb78f3a46cba93fcc291344661586503f35458b5750939354c79.scope: Deactivated successfully.
Nov 25 23:52:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:52:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:52:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:52:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:52:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:52:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:52:02 compute-0 podman[253397]: 2025-11-25 23:52:02.777337578 +0000 UTC m=+0.078689855 container create ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:52:02 compute-0 systemd[1]: Started libpod-conmon-ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e.scope.
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.824 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.826 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.826 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.827 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 23:52:02 compute-0 podman[253397]: 2025-11-25 23:52:02.746462581 +0000 UTC m=+0.047814908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:52:02 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:02 compute-0 podman[253397]: 2025-11-25 23:52:02.88083251 +0000 UTC m=+0.182184827 container init ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:52:02 compute-0 podman[253397]: 2025-11-25 23:52:02.889067448 +0000 UTC m=+0.190419725 container start ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:52:02 compute-0 podman[253397]: 2025-11-25 23:52:02.892767496 +0000 UTC m=+0.194119773 container attach ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.899 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.899 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.900 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.901 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.901 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.902 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.902 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.903 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.903 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.947 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.948 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.948 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.948 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 23:52:02 compute-0 nova_compute[252550]: 2025-11-25 23:52:02.949 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:52:03 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:52:03 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2052798394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:52:03 compute-0 nova_compute[252550]: 2025-11-25 23:52:03.362 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:52:03 compute-0 nova_compute[252550]: 2025-11-25 23:52:03.601 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:52:03 compute-0 nova_compute[252550]: 2025-11-25 23:52:03.605 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5260MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 23:52:03 compute-0 nova_compute[252550]: 2025-11-25 23:52:03.606 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:52:03 compute-0 nova_compute[252550]: 2025-11-25 23:52:03.606 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:52:03 compute-0 ceph-mon[75654]: pgmap v657: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:03 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2052798394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:52:03 compute-0 nova_compute[252550]: 2025-11-25 23:52:03.723 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 23:52:03 compute-0 nova_compute[252550]: 2025-11-25 23:52:03.724 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 23:52:03 compute-0 nova_compute[252550]: 2025-11-25 23:52:03.754 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:52:03 compute-0 competent_franklin[253414]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:52:03 compute-0 competent_franklin[253414]: --> relative data size: 1.0
Nov 25 23:52:03 compute-0 competent_franklin[253414]: --> All data devices are unavailable
Nov 25 23:52:03 compute-0 systemd[1]: libpod-ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e.scope: Deactivated successfully.
Nov 25 23:52:03 compute-0 podman[253397]: 2025-11-25 23:52:03.951582542 +0000 UTC m=+1.252934779 container died ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 23:52:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-903d9fbcd747f7003f7a12b6c1cf78c037bcac7393ce317cf388b618e7eb7464-merged.mount: Deactivated successfully.
Nov 25 23:52:04 compute-0 podman[253397]: 2025-11-25 23:52:04.039745397 +0000 UTC m=+1.341097664 container remove ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 25 23:52:04 compute-0 systemd[1]: libpod-conmon-ca07eb41e1684088d33d22f8e889afd2f92a99b28511d0136d08891dee9d256e.scope: Deactivated successfully.
Nov 25 23:52:04 compute-0 sudo[253293]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:04 compute-0 sudo[253499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:52:04 compute-0 sudo[253499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:04 compute-0 sudo[253499]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:52:04 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501805013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:52:04 compute-0 nova_compute[252550]: 2025-11-25 23:52:04.217 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:52:04 compute-0 nova_compute[252550]: 2025-11-25 23:52:04.225 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:52:04 compute-0 sudo[253526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:52:04 compute-0 sudo[253526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:04 compute-0 sudo[253526]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:04 compute-0 nova_compute[252550]: 2025-11-25 23:52:04.248 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:52:04 compute-0 nova_compute[252550]: 2025-11-25 23:52:04.292 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 23:52:04 compute-0 nova_compute[252550]: 2025-11-25 23:52:04.292 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:52:04 compute-0 sudo[253551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:52:04 compute-0 sudo[253551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:04 compute-0 sudo[253551]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:04 compute-0 sudo[253576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:52:04 compute-0 sudo[253576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:04 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1501805013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:52:04 compute-0 podman[253641]: 2025-11-25 23:52:04.781142427 +0000 UTC m=+0.039913789 container create f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:52:04 compute-0 systemd[1]: Started libpod-conmon-f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee.scope.
Nov 25 23:52:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:52:04 compute-0 podman[253641]: 2025-11-25 23:52:04.763567781 +0000 UTC m=+0.022339193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:52:04 compute-0 podman[253641]: 2025-11-25 23:52:04.869506207 +0000 UTC m=+0.128277589 container init f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 25 23:52:04 compute-0 podman[253641]: 2025-11-25 23:52:04.881422083 +0000 UTC m=+0.140193445 container start f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 25 23:52:04 compute-0 podman[253641]: 2025-11-25 23:52:04.885547453 +0000 UTC m=+0.144318785 container attach f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 23:52:04 compute-0 interesting_chatterjee[253658]: 167 167
Nov 25 23:52:04 compute-0 systemd[1]: libpod-f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee.scope: Deactivated successfully.
Nov 25 23:52:04 compute-0 podman[253641]: 2025-11-25 23:52:04.889775785 +0000 UTC m=+0.148547147 container died f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc548d54a43c0e581f94437fa3a8812ae7ba65f8fb95e3b8686d74358e3df1d8-merged.mount: Deactivated successfully.
Nov 25 23:52:04 compute-0 podman[253641]: 2025-11-25 23:52:04.93337789 +0000 UTC m=+0.192149252 container remove f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:52:04 compute-0 systemd[1]: libpod-conmon-f5fee0854b83fb9bbdf5ea1d954d8576d11db661ec7eb1b531ac61c1f14b07ee.scope: Deactivated successfully.
Nov 25 23:52:05 compute-0 podman[253682]: 2025-11-25 23:52:05.129855134 +0000 UTC m=+0.048172157 container create ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 25 23:52:05 compute-0 systemd[1]: Started libpod-conmon-ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05.scope.
Nov 25 23:52:05 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b62a8ded2e9e327dd7b60ec311efd791ce4d093e3eeece4b399562ee4858d54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b62a8ded2e9e327dd7b60ec311efd791ce4d093e3eeece4b399562ee4858d54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b62a8ded2e9e327dd7b60ec311efd791ce4d093e3eeece4b399562ee4858d54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b62a8ded2e9e327dd7b60ec311efd791ce4d093e3eeece4b399562ee4858d54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:05 compute-0 podman[253682]: 2025-11-25 23:52:05.115826592 +0000 UTC m=+0.034143655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:52:05 compute-0 podman[253682]: 2025-11-25 23:52:05.220976017 +0000 UTC m=+0.139293070 container init ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 23:52:05 compute-0 podman[253682]: 2025-11-25 23:52:05.23805941 +0000 UTC m=+0.156376483 container start ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:52:05 compute-0 podman[253682]: 2025-11-25 23:52:05.242066836 +0000 UTC m=+0.160383879 container attach ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:52:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:05 compute-0 ceph-mon[75654]: pgmap v658: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]: {
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:     "0": [
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:         {
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "devices": [
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "/dev/loop3"
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             ],
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_name": "ceph_lv0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_size": "21470642176",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "name": "ceph_lv0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "tags": {
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.cluster_name": "ceph",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.crush_device_class": "",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.encrypted": "0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.osd_id": "0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.type": "block",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.vdo": "0"
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             },
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "type": "block",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "vg_name": "ceph_vg0"
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:         }
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:     ],
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:     "1": [
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:         {
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "devices": [
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "/dev/loop4"
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             ],
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_name": "ceph_lv1",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_size": "21470642176",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "name": "ceph_lv1",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "tags": {
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.cluster_name": "ceph",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.crush_device_class": "",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.encrypted": "0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.osd_id": "1",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.type": "block",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.vdo": "0"
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             },
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "type": "block",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "vg_name": "ceph_vg1"
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:         }
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:     ],
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:     "2": [
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:         {
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "devices": [
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "/dev/loop5"
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             ],
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_name": "ceph_lv2",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_size": "21470642176",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "name": "ceph_lv2",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "tags": {
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.cluster_name": "ceph",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.crush_device_class": "",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.encrypted": "0",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.osd_id": "2",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.type": "block",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:                 "ceph.vdo": "0"
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             },
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "type": "block",
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:             "vg_name": "ceph_vg2"
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:         }
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]:     ]
Nov 25 23:52:05 compute-0 stupefied_fermi[253698]: }
Nov 25 23:52:06 compute-0 systemd[1]: libpod-ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05.scope: Deactivated successfully.
Nov 25 23:52:06 compute-0 podman[253682]: 2025-11-25 23:52:06.001881493 +0000 UTC m=+0.920198546 container died ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 23:52:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b62a8ded2e9e327dd7b60ec311efd791ce4d093e3eeece4b399562ee4858d54-merged.mount: Deactivated successfully.
Nov 25 23:52:06 compute-0 podman[253682]: 2025-11-25 23:52:06.060872445 +0000 UTC m=+0.979189488 container remove ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:52:06 compute-0 systemd[1]: libpod-conmon-ef3c4c3dd92c620173e9942959876d064ded7615134670cccdc26a7ea305ea05.scope: Deactivated successfully.
Nov 25 23:52:06 compute-0 sudo[253576]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:06 compute-0 sudo[253720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:52:06 compute-0 sudo[253720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:06 compute-0 sudo[253720]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:06 compute-0 sudo[253745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:52:06 compute-0 sudo[253745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:06 compute-0 sudo[253745]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:06 compute-0 sudo[253770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:52:06 compute-0 sudo[253770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:06 compute-0 sudo[253770]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:06 compute-0 sudo[253795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:52:06 compute-0 sudo[253795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:06 compute-0 podman[253860]: 2025-11-25 23:52:06.811755245 +0000 UTC m=+0.039393144 container create 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:52:06 compute-0 systemd[1]: Started libpod-conmon-6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c.scope.
Nov 25 23:52:06 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:52:06 compute-0 podman[253860]: 2025-11-25 23:52:06.79305912 +0000 UTC m=+0.020697029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:52:06 compute-0 podman[253860]: 2025-11-25 23:52:06.905800346 +0000 UTC m=+0.133438255 container init 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:52:06 compute-0 podman[253860]: 2025-11-25 23:52:06.917666961 +0000 UTC m=+0.145304890 container start 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 25 23:52:06 compute-0 podman[253860]: 2025-11-25 23:52:06.92139115 +0000 UTC m=+0.149029049 container attach 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 25 23:52:06 compute-0 quizzical_goldstine[253876]: 167 167
Nov 25 23:52:06 compute-0 systemd[1]: libpod-6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c.scope: Deactivated successfully.
Nov 25 23:52:06 compute-0 podman[253860]: 2025-11-25 23:52:06.926749251 +0000 UTC m=+0.154387220 container died 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:52:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-35b40ebc2af17f551843363b420e3d9c3b207cca71866bbc43758faef6c0eb9d-merged.mount: Deactivated successfully.
Nov 25 23:52:06 compute-0 podman[253860]: 2025-11-25 23:52:06.972468952 +0000 UTC m=+0.200106881 container remove 6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldstine, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:52:06 compute-0 systemd[1]: libpod-conmon-6711743fc5e13dd927dac8d0e2f40ddbb1d1e72efeb84e00f96a35eaa1b22b2c.scope: Deactivated successfully.
Nov 25 23:52:07 compute-0 podman[253899]: 2025-11-25 23:52:07.198829779 +0000 UTC m=+0.048205039 container create c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 25 23:52:07 compute-0 podman[253899]: 2025-11-25 23:52:07.172898401 +0000 UTC m=+0.022273721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:52:07 compute-0 systemd[1]: Started libpod-conmon-c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09.scope.
Nov 25 23:52:07 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0bbaaaf36b3b957c904d5fc81dbd58f2a4e63220324b77350dd29e8696b82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0bbaaaf36b3b957c904d5fc81dbd58f2a4e63220324b77350dd29e8696b82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0bbaaaf36b3b957c904d5fc81dbd58f2a4e63220324b77350dd29e8696b82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e0bbaaaf36b3b957c904d5fc81dbd58f2a4e63220324b77350dd29e8696b82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:52:07 compute-0 podman[253899]: 2025-11-25 23:52:07.313601289 +0000 UTC m=+0.162976549 container init c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:52:07 compute-0 podman[253899]: 2025-11-25 23:52:07.323182513 +0000 UTC m=+0.172557733 container start c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 23:52:07 compute-0 podman[253899]: 2025-11-25 23:52:07.326759347 +0000 UTC m=+0.176134617 container attach c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 23:52:07 compute-0 ceph-mon[75654]: pgmap v659: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:08 compute-0 jolly_brattain[253916]: {
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "osd_id": 2,
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "type": "bluestore"
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:     },
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "osd_id": 1,
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "type": "bluestore"
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:     },
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "osd_id": 0,
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:         "type": "bluestore"
Nov 25 23:52:08 compute-0 jolly_brattain[253916]:     }
Nov 25 23:52:08 compute-0 jolly_brattain[253916]: }
Nov 25 23:52:08 compute-0 systemd[1]: libpod-c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09.scope: Deactivated successfully.
Nov 25 23:52:08 compute-0 conmon[253916]: conmon c27d22830548d313eb83 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09.scope/container/memory.events
Nov 25 23:52:08 compute-0 podman[253899]: 2025-11-25 23:52:08.256793623 +0000 UTC m=+1.106168843 container died c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 23:52:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-74e0bbaaaf36b3b957c904d5fc81dbd58f2a4e63220324b77350dd29e8696b82-merged.mount: Deactivated successfully.
Nov 25 23:52:08 compute-0 podman[253899]: 2025-11-25 23:52:08.315466167 +0000 UTC m=+1.164841377 container remove c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_brattain, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 23:52:08 compute-0 systemd[1]: libpod-conmon-c27d22830548d313eb839deb5b5fa88221ca530c76943d76886d76aba2f73f09.scope: Deactivated successfully.
Nov 25 23:52:08 compute-0 sudo[253795]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:52:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:52:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:52:08 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:52:08 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 166a13fc-f909-4fe2-adff-4ac89d3f399a does not exist
Nov 25 23:52:08 compute-0 sudo[253959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:52:08 compute-0 sudo[253959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:08 compute-0 sudo[253959]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:08 compute-0 sudo[253984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:52:08 compute-0 sudo[253984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:52:08 compute-0 sudo[253984]: pam_unix(sudo:session): session closed for user root
Nov 25 23:52:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:52:09 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:52:09 compute-0 ceph-mon[75654]: pgmap v660: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:10 compute-0 podman[254009]: 2025-11-25 23:52:10.283158869 +0000 UTC m=+0.097187686 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Nov 25 23:52:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:11 compute-0 ceph-mon[75654]: pgmap v661: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:13 compute-0 ceph-mon[75654]: pgmap v662: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 25 23:52:15 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2037975218' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 23:52:15 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14322 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 23:52:15 compute-0 ceph-mgr[75954]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 23:52:15 compute-0 ceph-mgr[75954]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 23:52:15 compute-0 ceph-mon[75654]: pgmap v663: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:15 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2037975218' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 23:52:15 compute-0 ceph-mon[75654]: from='client.14322 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 23:52:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.600898) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735600934, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1992, "num_deletes": 506, "total_data_size": 1901277, "memory_usage": 1938544, "flush_reason": "Manual Compaction"}
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735615875, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1854530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12147, "largest_seqno": 14138, "table_properties": {"data_size": 1846000, "index_size": 4772, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 19787, "raw_average_key_size": 18, "raw_value_size": 1826852, "raw_average_value_size": 1712, "num_data_blocks": 219, "num_entries": 1067, "num_filter_entries": 1067, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114546, "oldest_key_time": 1764114546, "file_creation_time": 1764114735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 15028 microseconds, and 7974 cpu microseconds.
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.615924) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1854530 bytes OK
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.615944) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.617767) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.617787) EVENT_LOG_v1 {"time_micros": 1764114735617781, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.617805) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1891819, prev total WAL file size 1891819, number of live WAL files 2.
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.618763) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1811KB)], [32(4728KB)]
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735618802, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 6696543, "oldest_snapshot_seqno": -1}
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3289 keys, 5268892 bytes, temperature: kUnknown
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735658050, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 5268892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5244521, "index_size": 15046, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 77873, "raw_average_key_size": 23, "raw_value_size": 5183030, "raw_average_value_size": 1575, "num_data_blocks": 650, "num_entries": 3289, "num_filter_entries": 3289, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.658736) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 5268892 bytes
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.660483) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.6 rd, 132.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 4.6 +0.0 blob) out(5.0 +0.0 blob), read-write-amplify(6.5) write-amplify(2.8) OK, records in: 4314, records dropped: 1025 output_compression: NoCompression
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.660513) EVENT_LOG_v1 {"time_micros": 1764114735660498, "job": 14, "event": "compaction_finished", "compaction_time_micros": 39714, "compaction_time_cpu_micros": 23263, "output_level": 6, "num_output_files": 1, "total_output_size": 5268892, "num_input_records": 4314, "num_output_records": 3289, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735661361, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114735662994, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.618660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.663111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.663117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.663120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.663123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:52:15 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:52:15.663126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:52:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:17 compute-0 podman[254030]: 2025-11-25 23:52:17.302089864 +0000 UTC m=+0.128172536 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 23:52:17 compute-0 ceph-mon[75654]: pgmap v664: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:19 compute-0 podman[254057]: 2025-11-25 23:52:19.263520279 +0000 UTC m=+0.090315263 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 23:52:19 compute-0 ceph-mon[75654]: pgmap v665: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:21 compute-0 ceph-mon[75654]: pgmap v666: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:23 compute-0 ceph-mon[75654]: pgmap v667: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:25 compute-0 ceph-mon[75654]: pgmap v668: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:52:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:52:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:52:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:52:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:52:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:52:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:27 compute-0 ceph-mon[75654]: pgmap v669: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:29 compute-0 ceph-mon[75654]: pgmap v670: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 25 23:52:30 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2617582484' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 23:52:30 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14324 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 23:52:30 compute-0 ceph-mgr[75954]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 23:52:30 compute-0 ceph-mgr[75954]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 25 23:52:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:30 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2617582484' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 23:52:30 compute-0 ceph-mon[75654]: from='client.14324 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 25 23:52:31 compute-0 ceph-mon[75654]: pgmap v671: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:33 compute-0 ceph-mon[75654]: pgmap v672: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:35 compute-0 ceph-mon[75654]: pgmap v673: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:37 compute-0 ceph-mon[75654]: pgmap v674: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:39 compute-0 ceph-mon[75654]: pgmap v675: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:52:40.760 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:52:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:52:40.760 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:52:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:52:40.761 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:52:41 compute-0 podman[254076]: 2025-11-25 23:52:41.26865509 +0000 UTC m=+0.090770386 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 23:52:41 compute-0 ceph-mon[75654]: pgmap v676: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:43 compute-0 ceph-mon[75654]: pgmap v677: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:45 compute-0 ceph-mon[75654]: pgmap v678: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:47 compute-0 ceph-mon[75654]: pgmap v679: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:48 compute-0 podman[254096]: 2025-11-25 23:52:48.29004473 +0000 UTC m=+0.118356107 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 25 23:52:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:49 compute-0 ceph-mon[75654]: pgmap v680: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:50 compute-0 podman[254123]: 2025-11-25 23:52:50.266816271 +0000 UTC m=+0.084309434 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 23:52:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:52:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:52:51 compute-0 ceph-mon[75654]: pgmap v681: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:53 compute-0 ceph-mon[75654]: pgmap v682: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:52:55 compute-0 ceph-mon[75654]: pgmap v683: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:52:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:52:56
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images']
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:52:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:57 compute-0 ceph-mon[75654]: pgmap v684: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:52:59 compute-0 ceph-mon[75654]: pgmap v685: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:53:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [devicehealth INFO root] Check health
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:53:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:53:01 compute-0 ceph-mon[75654]: pgmap v686: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:02 compute-0 sshd-session[254142]: Invalid user userb from 45.78.216.132 port 51362
Nov 25 23:53:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:03 compute-0 sshd-session[254142]: Received disconnect from 45.78.216.132 port 51362:11: Bye Bye [preauth]
Nov 25 23:53:03 compute-0 sshd-session[254142]: Disconnected from invalid user userb 45.78.216.132 port 51362 [preauth]
Nov 25 23:53:03 compute-0 ceph-mon[75654]: pgmap v687: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.283 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.283 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.311 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.311 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.311 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.333 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.334 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.335 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.335 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.336 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.336 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.337 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.337 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.383 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.383 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.384 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.384 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.385 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:53:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:53:04 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876070695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:53:04 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/876070695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:53:04 compute-0 nova_compute[252550]: 2025-11-25 23:53:04.866 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.102 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.104 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5332MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.104 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.105 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.198 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.199 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.236 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:53:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:53:05 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114545605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.653 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.662 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.690 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.693 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 23:53:05 compute-0 nova_compute[252550]: 2025-11-25 23:53:05.693 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:53:05 compute-0 ceph-mon[75654]: pgmap v688: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:05 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3114545605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:53:06 compute-0 nova_compute[252550]: 2025-11-25 23:53:06.180 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:53:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:07 compute-0 ceph-mon[75654]: pgmap v689: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:08 compute-0 sudo[254188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:08 compute-0 sudo[254188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:08 compute-0 sudo[254188]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:08 compute-0 sudo[254213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:53:08 compute-0 sudo[254213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:08 compute-0 sudo[254213]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:08 compute-0 sudo[254238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:08 compute-0 sudo[254238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:08 compute-0 sudo[254238]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:08 compute-0 sudo[254263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 25 23:53:08 compute-0 sudo[254263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:09 compute-0 sudo[254263]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:53:09 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:53:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:53:09 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:53:09 compute-0 sudo[254307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:09 compute-0 sudo[254307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:09 compute-0 sudo[254307]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:09 compute-0 sudo[254332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:53:09 compute-0 sudo[254332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:09 compute-0 sudo[254332]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:09 compute-0 sudo[254357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:09 compute-0 sudo[254357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:09 compute-0 sudo[254357]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:09 compute-0 sudo[254382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:53:09 compute-0 sudo[254382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:09 compute-0 sudo[254382]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:53:09 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:53:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:53:09 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:53:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:53:10 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:53:10 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev e2db07c6-6b6a-440f-92ef-9c3df3e7e447 does not exist
Nov 25 23:53:10 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 7e0dbc29-7ede-450e-a32f-5d59d1a0244e does not exist
Nov 25 23:53:10 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 61cc813f-25b2-4370-a25a-639befed7035 does not exist
Nov 25 23:53:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:53:10 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:53:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:53:10 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:53:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:53:10 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:53:10 compute-0 ceph-mon[75654]: pgmap v690: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:53:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:53:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:53:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:53:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:53:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:53:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:53:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:53:10 compute-0 sudo[254440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:10 compute-0 sudo[254440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:10 compute-0 sudo[254440]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:10 compute-0 sudo[254465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:53:10 compute-0 sudo[254465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:10 compute-0 sudo[254465]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:10 compute-0 sudo[254490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:10 compute-0 sudo[254490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:10 compute-0 sudo[254490]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:10 compute-0 sudo[254515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:53:10 compute-0 sudo[254515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:10 compute-0 podman[254581]: 2025-11-25 23:53:10.675507895 +0000 UTC m=+0.062682832 container create 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:53:10 compute-0 systemd[1]: Started libpod-conmon-9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7.scope.
Nov 25 23:53:10 compute-0 podman[254581]: 2025-11-25 23:53:10.65379555 +0000 UTC m=+0.040970507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:53:10 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:53:10 compute-0 podman[254581]: 2025-11-25 23:53:10.787027168 +0000 UTC m=+0.174202175 container init 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:53:10 compute-0 podman[254581]: 2025-11-25 23:53:10.800016592 +0000 UTC m=+0.187191559 container start 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 23:53:10 compute-0 podman[254581]: 2025-11-25 23:53:10.803872735 +0000 UTC m=+0.191047702 container attach 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:53:10 compute-0 elastic_mestorf[254598]: 167 167
Nov 25 23:53:10 compute-0 systemd[1]: libpod-9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7.scope: Deactivated successfully.
Nov 25 23:53:10 compute-0 podman[254581]: 2025-11-25 23:53:10.808689373 +0000 UTC m=+0.195864350 container died 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-73a833f86bad84a5b34b0569bd33ba487e3ceafeacafb9e4b08669fd13ea26f7-merged.mount: Deactivated successfully.
Nov 25 23:53:10 compute-0 podman[254581]: 2025-11-25 23:53:10.857104635 +0000 UTC m=+0.244279612 container remove 9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 23:53:10 compute-0 systemd[1]: libpod-conmon-9659f4c064103e5cf91589670617c78bb3be67188f1f1ab1ad884f8deb1c44d7.scope: Deactivated successfully.
Nov 25 23:53:11 compute-0 podman[254622]: 2025-11-25 23:53:11.049199183 +0000 UTC m=+0.049316767 container create d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:53:11 compute-0 systemd[1]: Started libpod-conmon-d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8.scope.
Nov 25 23:53:11 compute-0 podman[254622]: 2025-11-25 23:53:11.027575421 +0000 UTC m=+0.027693045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:53:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:11 compute-0 podman[254622]: 2025-11-25 23:53:11.151049921 +0000 UTC m=+0.151167515 container init d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 23:53:11 compute-0 podman[254622]: 2025-11-25 23:53:11.167455076 +0000 UTC m=+0.167572700 container start d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:53:11 compute-0 podman[254622]: 2025-11-25 23:53:11.170824565 +0000 UTC m=+0.170942179 container attach d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 25 23:53:12 compute-0 ceph-mon[75654]: pgmap v691: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:12 compute-0 podman[254658]: 2025-11-25 23:53:12.286634271 +0000 UTC m=+0.094317439 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 23:53:12 compute-0 zen_jemison[254638]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:53:12 compute-0 zen_jemison[254638]: --> relative data size: 1.0
Nov 25 23:53:12 compute-0 zen_jemison[254638]: --> All data devices are unavailable
Nov 25 23:53:12 compute-0 systemd[1]: libpod-d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8.scope: Deactivated successfully.
Nov 25 23:53:12 compute-0 systemd[1]: libpod-d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8.scope: Consumed 1.168s CPU time.
Nov 25 23:53:12 compute-0 podman[254622]: 2025-11-25 23:53:12.386917658 +0000 UTC m=+1.387035272 container died d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:53:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f60de0da2985e1c0bc9b3fdf918e6eb1d555ea1695a3e2e1ddc845321a596cfb-merged.mount: Deactivated successfully.
Nov 25 23:53:12 compute-0 podman[254622]: 2025-11-25 23:53:12.468412377 +0000 UTC m=+1.468529991 container remove d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jemison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 25 23:53:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:12 compute-0 systemd[1]: libpod-conmon-d2f157404ca7bfaee10136a5ae07635321decccf4c3bc8447ab8e421de53bef8.scope: Deactivated successfully.
Nov 25 23:53:12 compute-0 sudo[254515]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:12 compute-0 sudo[254700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:12 compute-0 sudo[254700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:12 compute-0 sudo[254700]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:12 compute-0 sudo[254725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:53:12 compute-0 sudo[254725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:12 compute-0 sudo[254725]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:12 compute-0 sudo[254750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:12 compute-0 sudo[254750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:12 compute-0 sudo[254750]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:12 compute-0 sudo[254775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:53:12 compute-0 sudo[254775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:13 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:53:13.077 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:82:13', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '36:f3:66:b7:57:d1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 23:53:13 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:53:13.079 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 23:53:13 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:53:13.080 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 23:53:13 compute-0 podman[254841]: 2025-11-25 23:53:13.342683095 +0000 UTC m=+0.059872967 container create 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:53:13 compute-0 systemd[1]: Started libpod-conmon-4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078.scope.
Nov 25 23:53:13 compute-0 podman[254841]: 2025-11-25 23:53:13.322524291 +0000 UTC m=+0.039714203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:53:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:53:13 compute-0 podman[254841]: 2025-11-25 23:53:13.441860592 +0000 UTC m=+0.159050504 container init 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 25 23:53:13 compute-0 podman[254841]: 2025-11-25 23:53:13.453632994 +0000 UTC m=+0.170822886 container start 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:53:13 compute-0 podman[254841]: 2025-11-25 23:53:13.457820585 +0000 UTC m=+0.175010477 container attach 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:53:13 compute-0 systemd[1]: libpod-4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078.scope: Deactivated successfully.
Nov 25 23:53:13 compute-0 angry_yonath[254857]: 167 167
Nov 25 23:53:13 compute-0 conmon[254857]: conmon 4bdf816496dfb344d29d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078.scope/container/memory.events
Nov 25 23:53:13 compute-0 podman[254841]: 2025-11-25 23:53:13.464605995 +0000 UTC m=+0.181795897 container died 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:53:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0141ee6c347bf809ed57721410b50b959d4469dca2592db5bc32e243deaee10-merged.mount: Deactivated successfully.
Nov 25 23:53:13 compute-0 podman[254841]: 2025-11-25 23:53:13.51990685 +0000 UTC m=+0.237096752 container remove 4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_yonath, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:53:13 compute-0 systemd[1]: libpod-conmon-4bdf816496dfb344d29d8a03d5be609b3c272844349debe0e3baadd6a9320078.scope: Deactivated successfully.
Nov 25 23:53:13 compute-0 podman[254882]: 2025-11-25 23:53:13.713465246 +0000 UTC m=+0.049770138 container create 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:53:13 compute-0 systemd[1]: Started libpod-conmon-0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0.scope.
Nov 25 23:53:13 compute-0 podman[254882]: 2025-11-25 23:53:13.691979758 +0000 UTC m=+0.028284660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:53:13 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81858654bcf5f5ba045e7f68675462fb6a2d97b899d3e6f877f77982507c481/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81858654bcf5f5ba045e7f68675462fb6a2d97b899d3e6f877f77982507c481/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81858654bcf5f5ba045e7f68675462fb6a2d97b899d3e6f877f77982507c481/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81858654bcf5f5ba045e7f68675462fb6a2d97b899d3e6f877f77982507c481/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:13 compute-0 podman[254882]: 2025-11-25 23:53:13.816832575 +0000 UTC m=+0.153137507 container init 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:53:13 compute-0 podman[254882]: 2025-11-25 23:53:13.823894572 +0000 UTC m=+0.160199474 container start 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:53:13 compute-0 podman[254882]: 2025-11-25 23:53:13.82758424 +0000 UTC m=+0.163889142 container attach 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 25 23:53:14 compute-0 ceph-mon[75654]: pgmap v692: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]: {
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:     "0": [
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:         {
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "devices": [
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "/dev/loop3"
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             ],
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_name": "ceph_lv0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_size": "21470642176",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "name": "ceph_lv0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "tags": {
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.cluster_name": "ceph",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.crush_device_class": "",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.encrypted": "0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.osd_id": "0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.type": "block",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.vdo": "0"
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             },
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "type": "block",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "vg_name": "ceph_vg0"
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:         }
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:     ],
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:     "1": [
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:         {
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "devices": [
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "/dev/loop4"
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             ],
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_name": "ceph_lv1",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_size": "21470642176",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "name": "ceph_lv1",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "tags": {
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.cluster_name": "ceph",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.crush_device_class": "",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.encrypted": "0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.osd_id": "1",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.type": "block",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.vdo": "0"
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             },
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "type": "block",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "vg_name": "ceph_vg1"
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:         }
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:     ],
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:     "2": [
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:         {
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "devices": [
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "/dev/loop5"
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             ],
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_name": "ceph_lv2",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_size": "21470642176",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "name": "ceph_lv2",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "tags": {
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.cluster_name": "ceph",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.crush_device_class": "",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.encrypted": "0",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.osd_id": "2",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.type": "block",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:                 "ceph.vdo": "0"
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             },
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "type": "block",
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:             "vg_name": "ceph_vg2"
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:         }
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]:     ]
Nov 25 23:53:14 compute-0 pensive_antonelli[254898]: }
Nov 25 23:53:14 compute-0 systemd[1]: libpod-0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0.scope: Deactivated successfully.
Nov 25 23:53:14 compute-0 podman[254882]: 2025-11-25 23:53:14.582390984 +0000 UTC m=+0.918695956 container died 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:53:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b81858654bcf5f5ba045e7f68675462fb6a2d97b899d3e6f877f77982507c481-merged.mount: Deactivated successfully.
Nov 25 23:53:14 compute-0 podman[254882]: 2025-11-25 23:53:14.667670413 +0000 UTC m=+1.003975335 container remove 0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 23:53:14 compute-0 systemd[1]: libpod-conmon-0bb53afdcd9735b24826c02b420c02e9aa01d03c5565888c91d55523a43dded0.scope: Deactivated successfully.
Nov 25 23:53:14 compute-0 sudo[254775]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:14 compute-0 sudo[254918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:14 compute-0 sudo[254918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:14 compute-0 sudo[254918]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:14 compute-0 sudo[254943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:53:14 compute-0 sudo[254943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:14 compute-0 sudo[254943]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:14 compute-0 sudo[254968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:15 compute-0 sudo[254968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:15 compute-0 sudo[254968]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:15 compute-0 sudo[254993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:53:15 compute-0 sudo[254993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:15 compute-0 podman[255058]: 2025-11-25 23:53:15.555264345 +0000 UTC m=+0.069699758 container create ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 23:53:15 compute-0 systemd[1]: Started libpod-conmon-ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc.scope.
Nov 25 23:53:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:15 compute-0 podman[255058]: 2025-11-25 23:53:15.526305487 +0000 UTC m=+0.040740970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:53:15 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:53:15 compute-0 podman[255058]: 2025-11-25 23:53:15.654045341 +0000 UTC m=+0.168480774 container init ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:53:15 compute-0 podman[255058]: 2025-11-25 23:53:15.665243307 +0000 UTC m=+0.179678730 container start ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 23:53:15 compute-0 podman[255058]: 2025-11-25 23:53:15.669256594 +0000 UTC m=+0.183692047 container attach ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:53:15 compute-0 suspicious_beaver[255074]: 167 167
Nov 25 23:53:15 compute-0 systemd[1]: libpod-ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc.scope: Deactivated successfully.
Nov 25 23:53:15 compute-0 podman[255058]: 2025-11-25 23:53:15.673349162 +0000 UTC m=+0.187784585 container died ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:53:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cc861ab4644f8d1a9b507b4616bb62bd6bdb034dbfb6c50ac5664e1fee6d50b-merged.mount: Deactivated successfully.
Nov 25 23:53:15 compute-0 podman[255058]: 2025-11-25 23:53:15.720962353 +0000 UTC m=+0.235397776 container remove ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:53:15 compute-0 systemd[1]: libpod-conmon-ef17cfdfaae6183aca688a1c2a30b47e1629dc77745886434e1c2f560dc9fbbc.scope: Deactivated successfully.
Nov 25 23:53:15 compute-0 podman[255097]: 2025-11-25 23:53:15.982788429 +0000 UTC m=+0.071451504 container create cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 25 23:53:16 compute-0 systemd[1]: Started libpod-conmon-cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3.scope.
Nov 25 23:53:16 compute-0 podman[255097]: 2025-11-25 23:53:15.956469132 +0000 UTC m=+0.045132267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:53:16 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:53:16 compute-0 ceph-mon[75654]: pgmap v693: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ba34aa54dd988ce00c0703792922cc14ca2c3bee90adf3def72ff5e40a4b04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ba34aa54dd988ce00c0703792922cc14ca2c3bee90adf3def72ff5e40a4b04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ba34aa54dd988ce00c0703792922cc14ca2c3bee90adf3def72ff5e40a4b04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0ba34aa54dd988ce00c0703792922cc14ca2c3bee90adf3def72ff5e40a4b04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:53:16 compute-0 podman[255097]: 2025-11-25 23:53:16.101669698 +0000 UTC m=+0.190332813 container init cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 25 23:53:16 compute-0 podman[255097]: 2025-11-25 23:53:16.113604314 +0000 UTC m=+0.202267399 container start cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:53:16 compute-0 podman[255097]: 2025-11-25 23:53:16.11834961 +0000 UTC m=+0.207012745 container attach cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 23:53:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:17 compute-0 laughing_einstein[255113]: {
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "osd_id": 2,
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "type": "bluestore"
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:     },
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "osd_id": 1,
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "type": "bluestore"
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:     },
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "osd_id": 0,
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:         "type": "bluestore"
Nov 25 23:53:17 compute-0 laughing_einstein[255113]:     }
Nov 25 23:53:17 compute-0 laughing_einstein[255113]: }
Nov 25 23:53:17 compute-0 systemd[1]: libpod-cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3.scope: Deactivated successfully.
Nov 25 23:53:17 compute-0 podman[255097]: 2025-11-25 23:53:17.206800012 +0000 UTC m=+1.295463087 container died cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 23:53:17 compute-0 systemd[1]: libpod-cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3.scope: Consumed 1.100s CPU time.
Nov 25 23:53:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0ba34aa54dd988ce00c0703792922cc14ca2c3bee90adf3def72ff5e40a4b04-merged.mount: Deactivated successfully.
Nov 25 23:53:17 compute-0 podman[255097]: 2025-11-25 23:53:17.284542421 +0000 UTC m=+1.373205506 container remove cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:53:17 compute-0 systemd[1]: libpod-conmon-cba5fa9584e1c9f147c820dd6239dee3759d7bb85e5601acee231c177b1762b3.scope: Deactivated successfully.
Nov 25 23:53:17 compute-0 sudo[254993]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:53:17 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:53:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:53:17 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:53:17 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev bf195f5f-0f37-46b0-a695-6f451d27bf18 does not exist
Nov 25 23:53:17 compute-0 sudo[255159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:53:17 compute-0 sudo[255159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:17 compute-0 sudo[255159]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:17 compute-0 sudo[255184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:53:17 compute-0 sudo[255184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:53:17 compute-0 sudo[255184]: pam_unix(sudo:session): session closed for user root
Nov 25 23:53:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:53:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2398708724' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:53:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:53:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2398708724' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:53:18 compute-0 ceph-mon[75654]: pgmap v694: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:53:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:53:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2398708724' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:53:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2398708724' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:53:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:19 compute-0 podman[255209]: 2025-11-25 23:53:19.301489978 +0000 UTC m=+0.131677448 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 23:53:20 compute-0 ceph-mon[75654]: pgmap v695: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:21 compute-0 podman[255235]: 2025-11-25 23:53:21.237112762 +0000 UTC m=+0.058630204 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:53:22 compute-0 ceph-mon[75654]: pgmap v696: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:24 compute-0 ceph-mon[75654]: pgmap v697: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:53:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:53:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:53:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:53:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:53:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:53:26 compute-0 ceph-mon[75654]: pgmap v698: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:28 compute-0 ceph-mon[75654]: pgmap v699: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:30 compute-0 ceph-mon[75654]: pgmap v700: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:32 compute-0 ceph-mon[75654]: pgmap v701: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:34 compute-0 ceph-mon[75654]: pgmap v702: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:36 compute-0 ceph-mon[75654]: pgmap v703: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:38 compute-0 ceph-mon[75654]: pgmap v704: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:40 compute-0 ceph-mon[75654]: pgmap v705: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:53:40.762 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:53:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:53:40.762 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:53:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:53:40.763 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:53:42 compute-0 ceph-mon[75654]: pgmap v706: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:43 compute-0 podman[255257]: 2025-11-25 23:53:43.27301952 +0000 UTC m=+0.094469983 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 23:53:44 compute-0 sshd-session[255255]: Received disconnect from 171.244.140.22 port 55864:11: Bye Bye [preauth]
Nov 25 23:53:44 compute-0 sshd-session[255255]: Disconnected from authenticating user root 171.244.140.22 port 55864 [preauth]
Nov 25 23:53:44 compute-0 ceph-mon[75654]: pgmap v707: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:45 compute-0 ceph-mon[75654]: pgmap v708: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:47 compute-0 ceph-mon[75654]: pgmap v709: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:49 compute-0 ceph-mon[75654]: pgmap v710: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:50 compute-0 podman[255278]: 2025-11-25 23:53:50.326218222 +0000 UTC m=+0.143945074 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 23:53:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:51 compute-0 ceph-mon[75654]: pgmap v711: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:52 compute-0 podman[255306]: 2025-11-25 23:53:52.258609258 +0000 UTC m=+0.080657027 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 23:53:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:53 compute-0 ceph-mon[75654]: pgmap v712: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:53:55 compute-0 ceph-mon[75654]: pgmap v713: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:53:56
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr']
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:53:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:57 compute-0 ceph-mon[75654]: pgmap v714: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:53:59 compute-0 ceph-mon[75654]: pgmap v715: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:01 compute-0 ceph-mon[75654]: pgmap v716: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:54:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:54:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:02 compute-0 nova_compute[252550]: 2025-11-25 23:54:02.818 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:54:02 compute-0 nova_compute[252550]: 2025-11-25 23:54:02.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:54:02 compute-0 nova_compute[252550]: 2025-11-25 23:54:02.863 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:54:02 compute-0 nova_compute[252550]: 2025-11-25 23:54:02.864 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:54:02 compute-0 nova_compute[252550]: 2025-11-25 23:54:02.864 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:54:02 compute-0 nova_compute[252550]: 2025-11-25 23:54:02.864 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 23:54:02 compute-0 nova_compute[252550]: 2025-11-25 23:54:02.865 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:54:03 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:54:03 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445300293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:54:03 compute-0 nova_compute[252550]: 2025-11-25 23:54:03.299 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:54:03 compute-0 nova_compute[252550]: 2025-11-25 23:54:03.534 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:54:03 compute-0 nova_compute[252550]: 2025-11-25 23:54:03.535 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5308MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 23:54:03 compute-0 nova_compute[252550]: 2025-11-25 23:54:03.536 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:54:03 compute-0 nova_compute[252550]: 2025-11-25 23:54:03.536 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:54:03 compute-0 ceph-mon[75654]: pgmap v717: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:03 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3445300293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:54:04 compute-0 nova_compute[252550]: 2025-11-25 23:54:04.115 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 23:54:04 compute-0 nova_compute[252550]: 2025-11-25 23:54:04.115 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 23:54:04 compute-0 nova_compute[252550]: 2025-11-25 23:54:04.133 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:54:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:54:04 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944120377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:54:04 compute-0 nova_compute[252550]: 2025-11-25 23:54:04.586 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:54:04 compute-0 nova_compute[252550]: 2025-11-25 23:54:04.595 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:54:04 compute-0 nova_compute[252550]: 2025-11-25 23:54:04.637 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:54:04 compute-0 nova_compute[252550]: 2025-11-25 23:54:04.640 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 23:54:04 compute-0 nova_compute[252550]: 2025-11-25 23:54:04.640 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:54:04 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2944120377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:54:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:05 compute-0 nova_compute[252550]: 2025-11-25 23:54:05.642 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:54:05 compute-0 nova_compute[252550]: 2025-11-25 23:54:05.642 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 23:54:05 compute-0 nova_compute[252550]: 2025-11-25 23:54:05.642 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 23:54:05 compute-0 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 23:54:05 compute-0 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:54:05 compute-0 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:54:05 compute-0 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:54:05 compute-0 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:54:05 compute-0 nova_compute[252550]: 2025-11-25 23:54:05.656 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 23:54:05 compute-0 ceph-mon[75654]: pgmap v718: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:05 compute-0 nova_compute[252550]: 2025-11-25 23:54:05.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:54:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:06 compute-0 nova_compute[252550]: 2025-11-25 23:54:06.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:54:07 compute-0 ceph-mon[75654]: pgmap v719: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:09 compute-0 ceph-mon[75654]: pgmap v720: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:11 compute-0 ceph-mon[75654]: pgmap v721: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:13 compute-0 ceph-mon[75654]: pgmap v722: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:14 compute-0 podman[255369]: 2025-11-25 23:54:14.260265685 +0000 UTC m=+0.083651260 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 23:54:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:15 compute-0 ceph-mon[75654]: pgmap v723: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:17 compute-0 sudo[255389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:54:17 compute-0 sudo[255389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:17 compute-0 sudo[255389]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:54:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4023673141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:54:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:54:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4023673141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:54:17 compute-0 ceph-mon[75654]: pgmap v724: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/4023673141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:54:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/4023673141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:54:17 compute-0 sudo[255414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:54:17 compute-0 sudo[255414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:17 compute-0 sudo[255414]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:17 compute-0 sudo[255439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:54:17 compute-0 sudo[255439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:17 compute-0 sudo[255439]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:18 compute-0 sudo[255464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:54:18 compute-0 sudo[255464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:18 compute-0 sudo[255464]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 25 23:54:18 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 23:54:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:54:18 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:54:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:54:18 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:54:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:54:18 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:54:18 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 7bcc1c24-e996-4266-8ad2-a87459473886 does not exist
Nov 25 23:54:18 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev e202ec38-7e46-4e37-a606-3a3d0d8861cf does not exist
Nov 25 23:54:18 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev cd7fc8a0-648a-496c-bb06-a2d34f04365e does not exist
Nov 25 23:54:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:54:18 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:54:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:54:18 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:54:18 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:54:18 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:54:18 compute-0 sudo[255522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:54:18 compute-0 sudo[255522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:18 compute-0 sudo[255522]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 23:54:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:54:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:54:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:54:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:54:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:54:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:54:18 compute-0 sudo[255547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:54:18 compute-0 sudo[255547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:18 compute-0 sudo[255547]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:18 compute-0 sudo[255572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:54:18 compute-0 sudo[255572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:19 compute-0 sudo[255572]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:19 compute-0 sudo[255597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:54:19 compute-0 sudo[255597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:19 compute-0 podman[255663]: 2025-11-25 23:54:19.536629293 +0000 UTC m=+0.059430561 container create 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:54:19 compute-0 systemd[1]: Started libpod-conmon-8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b.scope.
Nov 25 23:54:19 compute-0 podman[255663]: 2025-11-25 23:54:19.503225929 +0000 UTC m=+0.026027277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:54:19 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:54:19 compute-0 podman[255663]: 2025-11-25 23:54:19.643713138 +0000 UTC m=+0.166514496 container init 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:54:19 compute-0 podman[255663]: 2025-11-25 23:54:19.65723631 +0000 UTC m=+0.180037608 container start 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 23:54:19 compute-0 podman[255663]: 2025-11-25 23:54:19.66134159 +0000 UTC m=+0.184142958 container attach 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 23:54:19 compute-0 practical_stonebraker[255680]: 167 167
Nov 25 23:54:19 compute-0 systemd[1]: libpod-8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b.scope: Deactivated successfully.
Nov 25 23:54:19 compute-0 conmon[255680]: conmon 8b497cd4b0b6b94630a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b.scope/container/memory.events
Nov 25 23:54:19 compute-0 podman[255663]: 2025-11-25 23:54:19.669244302 +0000 UTC m=+0.192045660 container died 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed1052129f83b34ef5b7a94a2a15d0f237df7d6085d9a120574d072e43bdb51f-merged.mount: Deactivated successfully.
Nov 25 23:54:19 compute-0 podman[255663]: 2025-11-25 23:54:19.730408558 +0000 UTC m=+0.253209866 container remove 8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 23:54:19 compute-0 systemd[1]: libpod-conmon-8b497cd4b0b6b94630a6d2af9c54c63aa8bd78dfaf41461c73ddbcaa705a4f8b.scope: Deactivated successfully.
Nov 25 23:54:19 compute-0 ceph-mon[75654]: pgmap v725: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:19 compute-0 podman[255703]: 2025-11-25 23:54:19.976939055 +0000 UTC m=+0.069709675 container create 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 23:54:20 compute-0 systemd[1]: Started libpod-conmon-4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c.scope.
Nov 25 23:54:20 compute-0 podman[255703]: 2025-11-25 23:54:19.946778429 +0000 UTC m=+0.039549099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:54:20 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:20 compute-0 podman[255703]: 2025-11-25 23:54:20.09705367 +0000 UTC m=+0.189824310 container init 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 25 23:54:20 compute-0 podman[255703]: 2025-11-25 23:54:20.111129576 +0000 UTC m=+0.203900196 container start 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:54:20 compute-0 podman[255703]: 2025-11-25 23:54:20.115588977 +0000 UTC m=+0.208359607 container attach 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 23:54:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:21 compute-0 mystifying_vaughan[255719]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:54:21 compute-0 mystifying_vaughan[255719]: --> relative data size: 1.0
Nov 25 23:54:21 compute-0 mystifying_vaughan[255719]: --> All data devices are unavailable
Nov 25 23:54:21 compute-0 systemd[1]: libpod-4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c.scope: Deactivated successfully.
Nov 25 23:54:21 compute-0 podman[255703]: 2025-11-25 23:54:21.287612291 +0000 UTC m=+1.380382881 container died 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 23:54:21 compute-0 systemd[1]: libpod-4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c.scope: Consumed 1.128s CPU time.
Nov 25 23:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ed708a7ae0bd6f1942ae663295fdc7744a8f7bc665dea44fea176c00bc653fb-merged.mount: Deactivated successfully.
Nov 25 23:54:21 compute-0 podman[255703]: 2025-11-25 23:54:21.382301434 +0000 UTC m=+1.475072064 container remove 4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:54:21 compute-0 podman[255744]: 2025-11-25 23:54:21.387588416 +0000 UTC m=+0.204696069 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 23:54:21 compute-0 systemd[1]: libpod-conmon-4af3d0a5799f3e68b4e4c95f43286b31cc9756cc1ce4fd727df5bb4123f1825c.scope: Deactivated successfully.
Nov 25 23:54:21 compute-0 sudo[255597]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:21 compute-0 sudo[255786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:54:21 compute-0 sudo[255786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:21 compute-0 sudo[255786]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:21 compute-0 sudo[255811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:54:21 compute-0 sudo[255811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:21 compute-0 sudo[255811]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:21 compute-0 sudo[255836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:54:21 compute-0 sudo[255836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:21 compute-0 sudo[255836]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:21 compute-0 sudo[255861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:54:21 compute-0 sudo[255861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:21 compute-0 ceph-mon[75654]: pgmap v726: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:22 compute-0 podman[255926]: 2025-11-25 23:54:22.146129455 +0000 UTC m=+0.046569138 container create 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:54:22 compute-0 systemd[1]: Started libpod-conmon-88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8.scope.
Nov 25 23:54:22 compute-0 podman[255926]: 2025-11-25 23:54:22.124915647 +0000 UTC m=+0.025355320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:54:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:54:22 compute-0 podman[255926]: 2025-11-25 23:54:22.253563069 +0000 UTC m=+0.154002782 container init 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:54:22 compute-0 podman[255926]: 2025-11-25 23:54:22.260776623 +0000 UTC m=+0.161216276 container start 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 23:54:22 compute-0 podman[255926]: 2025-11-25 23:54:22.264386989 +0000 UTC m=+0.164826702 container attach 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 23:54:22 compute-0 vigorous_chandrasekhar[255943]: 167 167
Nov 25 23:54:22 compute-0 systemd[1]: libpod-88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8.scope: Deactivated successfully.
Nov 25 23:54:22 compute-0 podman[255926]: 2025-11-25 23:54:22.266565518 +0000 UTC m=+0.167005171 container died 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-55d9935370e809b7bfe317c6161d60159ffd1eea305d2a3231512683daf5a00e-merged.mount: Deactivated successfully.
Nov 25 23:54:22 compute-0 podman[255926]: 2025-11-25 23:54:22.307692188 +0000 UTC m=+0.208131831 container remove 88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chandrasekhar, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:54:22 compute-0 systemd[1]: libpod-conmon-88098d4b94214abd9ff9571f05c086bca8c5a974ed2616fe3d21496679f9c8b8.scope: Deactivated successfully.
Nov 25 23:54:22 compute-0 podman[255956]: 2025-11-25 23:54:22.392655352 +0000 UTC m=+0.068372471 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 23:54:22 compute-0 podman[255988]: 2025-11-25 23:54:22.484189961 +0000 UTC m=+0.049722611 container create 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 25 23:54:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:22 compute-0 systemd[1]: Started libpod-conmon-66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d.scope.
Nov 25 23:54:22 compute-0 podman[255988]: 2025-11-25 23:54:22.457841886 +0000 UTC m=+0.023374566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:54:22 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a2d5da11f2f3cd80db120a39e029e9e31ac539ef1b81473121f7f7a776ce94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a2d5da11f2f3cd80db120a39e029e9e31ac539ef1b81473121f7f7a776ce94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a2d5da11f2f3cd80db120a39e029e9e31ac539ef1b81473121f7f7a776ce94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a2d5da11f2f3cd80db120a39e029e9e31ac539ef1b81473121f7f7a776ce94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:22 compute-0 podman[255988]: 2025-11-25 23:54:22.588810611 +0000 UTC m=+0.154343301 container init 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:54:22 compute-0 podman[255988]: 2025-11-25 23:54:22.603491523 +0000 UTC m=+0.169024213 container start 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 25 23:54:22 compute-0 podman[255988]: 2025-11-25 23:54:22.607498891 +0000 UTC m=+0.173031591 container attach 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 23:54:23 compute-0 loving_euclid[256005]: {
Nov 25 23:54:23 compute-0 loving_euclid[256005]:     "0": [
Nov 25 23:54:23 compute-0 loving_euclid[256005]:         {
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "devices": [
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "/dev/loop3"
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             ],
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_name": "ceph_lv0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_size": "21470642176",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "name": "ceph_lv0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "tags": {
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.cluster_name": "ceph",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.crush_device_class": "",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.encrypted": "0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.osd_id": "0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.type": "block",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.vdo": "0"
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             },
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "type": "block",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "vg_name": "ceph_vg0"
Nov 25 23:54:23 compute-0 loving_euclid[256005]:         }
Nov 25 23:54:23 compute-0 loving_euclid[256005]:     ],
Nov 25 23:54:23 compute-0 loving_euclid[256005]:     "1": [
Nov 25 23:54:23 compute-0 loving_euclid[256005]:         {
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "devices": [
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "/dev/loop4"
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             ],
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_name": "ceph_lv1",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_size": "21470642176",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "name": "ceph_lv1",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "tags": {
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.cluster_name": "ceph",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.crush_device_class": "",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.encrypted": "0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.osd_id": "1",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.type": "block",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.vdo": "0"
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             },
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "type": "block",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "vg_name": "ceph_vg1"
Nov 25 23:54:23 compute-0 loving_euclid[256005]:         }
Nov 25 23:54:23 compute-0 loving_euclid[256005]:     ],
Nov 25 23:54:23 compute-0 loving_euclid[256005]:     "2": [
Nov 25 23:54:23 compute-0 loving_euclid[256005]:         {
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "devices": [
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "/dev/loop5"
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             ],
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_name": "ceph_lv2",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_size": "21470642176",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "name": "ceph_lv2",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "tags": {
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.cluster_name": "ceph",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.crush_device_class": "",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.encrypted": "0",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.osd_id": "2",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.type": "block",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:                 "ceph.vdo": "0"
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             },
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "type": "block",
Nov 25 23:54:23 compute-0 loving_euclid[256005]:             "vg_name": "ceph_vg2"
Nov 25 23:54:23 compute-0 loving_euclid[256005]:         }
Nov 25 23:54:23 compute-0 loving_euclid[256005]:     ]
Nov 25 23:54:23 compute-0 loving_euclid[256005]: }
Nov 25 23:54:23 compute-0 systemd[1]: libpod-66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d.scope: Deactivated successfully.
Nov 25 23:54:23 compute-0 podman[255988]: 2025-11-25 23:54:23.379727147 +0000 UTC m=+0.945259817 container died 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 23:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-52a2d5da11f2f3cd80db120a39e029e9e31ac539ef1b81473121f7f7a776ce94-merged.mount: Deactivated successfully.
Nov 25 23:54:23 compute-0 podman[255988]: 2025-11-25 23:54:23.43068683 +0000 UTC m=+0.996219490 container remove 66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:54:23 compute-0 systemd[1]: libpod-conmon-66d0e4ae39797f0432fde94f3f4e31cf51756adc6293cf2ec02bf7b59ee6432d.scope: Deactivated successfully.
Nov 25 23:54:23 compute-0 sudo[255861]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:23 compute-0 sudo[256025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:54:23 compute-0 sudo[256025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:23 compute-0 sudo[256025]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:23 compute-0 sudo[256050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:54:23 compute-0 sudo[256050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:23 compute-0 sudo[256050]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:23 compute-0 sudo[256075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:54:23 compute-0 sudo[256075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:23 compute-0 sudo[256075]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:23 compute-0 sudo[256100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:54:23 compute-0 sudo[256100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:23 compute-0 ceph-mon[75654]: pgmap v727: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:24 compute-0 podman[256164]: 2025-11-25 23:54:24.065078656 +0000 UTC m=+0.050690517 container create df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:54:24 compute-0 systemd[1]: Started libpod-conmon-df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace.scope.
Nov 25 23:54:24 compute-0 podman[256164]: 2025-11-25 23:54:24.038329611 +0000 UTC m=+0.023941522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:54:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:54:24 compute-0 podman[256164]: 2025-11-25 23:54:24.155574499 +0000 UTC m=+0.141186370 container init df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 23:54:24 compute-0 podman[256164]: 2025-11-25 23:54:24.165428812 +0000 UTC m=+0.151040683 container start df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 25 23:54:24 compute-0 podman[256164]: 2025-11-25 23:54:24.169864161 +0000 UTC m=+0.155476032 container attach df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 25 23:54:24 compute-0 pedantic_gagarin[256180]: 167 167
Nov 25 23:54:24 compute-0 systemd[1]: libpod-df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace.scope: Deactivated successfully.
Nov 25 23:54:24 compute-0 conmon[256180]: conmon df1ec64940f1182489d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace.scope/container/memory.events
Nov 25 23:54:24 compute-0 podman[256164]: 2025-11-25 23:54:24.175932653 +0000 UTC m=+0.161544484 container died df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:54:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0acc40e5fe6d09075b61532ea2fc196c48ce7722ffe3c0321c0c3b38de5180f4-merged.mount: Deactivated successfully.
Nov 25 23:54:24 compute-0 podman[256164]: 2025-11-25 23:54:24.216402916 +0000 UTC m=+0.202014747 container remove df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:54:24 compute-0 systemd[1]: libpod-conmon-df1ec64940f1182489d02655b4c55d9ac745d9928b458711178ddd8660461ace.scope: Deactivated successfully.
Nov 25 23:54:24 compute-0 podman[256204]: 2025-11-25 23:54:24.41158387 +0000 UTC m=+0.052898257 container create 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:54:24 compute-0 systemd[1]: Started libpod-conmon-2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2.scope.
Nov 25 23:54:24 compute-0 podman[256204]: 2025-11-25 23:54:24.389631372 +0000 UTC m=+0.030945809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:54:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86908d6d3cba6317c257beb6864283ed727bc0e1202a00fb8252ce93ff94c648/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86908d6d3cba6317c257beb6864283ed727bc0e1202a00fb8252ce93ff94c648/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86908d6d3cba6317c257beb6864283ed727bc0e1202a00fb8252ce93ff94c648/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86908d6d3cba6317c257beb6864283ed727bc0e1202a00fb8252ce93ff94c648/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:54:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:24 compute-0 podman[256204]: 2025-11-25 23:54:24.530906713 +0000 UTC m=+0.172221130 container init 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:54:24 compute-0 podman[256204]: 2025-11-25 23:54:24.53942222 +0000 UTC m=+0.180736597 container start 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:54:24 compute-0 podman[256204]: 2025-11-25 23:54:24.542695288 +0000 UTC m=+0.184009685 container attach 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 25 23:54:25 compute-0 cool_bartik[256221]: {
Nov 25 23:54:25 compute-0 cool_bartik[256221]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "osd_id": 2,
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "type": "bluestore"
Nov 25 23:54:25 compute-0 cool_bartik[256221]:     },
Nov 25 23:54:25 compute-0 cool_bartik[256221]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "osd_id": 1,
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "type": "bluestore"
Nov 25 23:54:25 compute-0 cool_bartik[256221]:     },
Nov 25 23:54:25 compute-0 cool_bartik[256221]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "osd_id": 0,
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:54:25 compute-0 cool_bartik[256221]:         "type": "bluestore"
Nov 25 23:54:25 compute-0 cool_bartik[256221]:     }
Nov 25 23:54:25 compute-0 cool_bartik[256221]: }
Nov 25 23:54:25 compute-0 systemd[1]: libpod-2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2.scope: Deactivated successfully.
Nov 25 23:54:25 compute-0 podman[256204]: 2025-11-25 23:54:25.479675602 +0000 UTC m=+1.120989989 container died 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 25 23:54:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-86908d6d3cba6317c257beb6864283ed727bc0e1202a00fb8252ce93ff94c648-merged.mount: Deactivated successfully.
Nov 25 23:54:25 compute-0 podman[256204]: 2025-11-25 23:54:25.557460703 +0000 UTC m=+1.198775080 container remove 2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_bartik, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 23:54:25 compute-0 systemd[1]: libpod-conmon-2e6886b124e2a84591a0e41b42146b4b39cb7c52dfd127deb4da302f773f80f2.scope: Deactivated successfully.
Nov 25 23:54:25 compute-0 sudo[256100]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:54:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:25 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:54:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:54:25 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:54:25 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 9fb0b7c0-acd8-4f3a-8c53-e7cebf3f8697 does not exist
Nov 25 23:54:25 compute-0 sudo[256268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:54:25 compute-0 sudo[256268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:25 compute-0 sudo[256268]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:25 compute-0 sudo[256293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:54:25 compute-0 sudo[256293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:54:25 compute-0 sudo[256293]: pam_unix(sudo:session): session closed for user root
Nov 25 23:54:25 compute-0 ceph-mon[75654]: pgmap v728: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:25 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:54:25 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:54:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:54:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:54:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:54:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:54:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:54:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:54:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:27 compute-0 ceph-mon[75654]: pgmap v729: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:29 compute-0 ceph-mon[75654]: pgmap v730: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:31 compute-0 ceph-mon[75654]: pgmap v731: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:33 compute-0 ceph-mon[75654]: pgmap v732: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:35 compute-0 ceph-mon[75654]: pgmap v733: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:37 compute-0 ceph-mon[75654]: pgmap v734: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:39 compute-0 ceph-mon[75654]: pgmap v735: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:54:40.763 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:54:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:54:40.764 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:54:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:54:40.764 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:54:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 25 23:54:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 25 23:54:41 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 25 23:54:41 compute-0 ceph-mon[75654]: pgmap v736: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:42 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 25 23:54:42 compute-0 ceph-mon[75654]: osdmap e64: 3 total, 3 up, 3 in
Nov 25 23:54:42 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 25 23:54:42 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 25 23:54:43 compute-0 ceph-mon[75654]: pgmap v738: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:43 compute-0 ceph-mon[75654]: osdmap e65: 3 total, 3 up, 3 in
Nov 25 23:54:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 25 23:54:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 25 23:54:44 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 25 23:54:45 compute-0 podman[256318]: 2025-11-25 23:54:45.290062009 +0000 UTC m=+0.111041023 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 23:54:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:45 compute-0 ceph-mon[75654]: pgmap v740: 177 pgs: 177 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:54:45 compute-0 ceph-mon[75654]: osdmap e66: 3 total, 3 up, 3 in
Nov 25 23:54:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 16 MiB data, 97 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.7 MiB/s wr, 31 op/s
Nov 25 23:54:47 compute-0 ceph-mon[75654]: pgmap v742: 177 pgs: 177 active+clean; 16 MiB data, 97 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.7 MiB/s wr, 31 op/s
Nov 25 23:54:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 21 MiB data, 102 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.1 MiB/s wr, 29 op/s
Nov 25 23:54:48 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 25 23:54:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 25 23:54:49 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 25 23:54:50 compute-0 ceph-mon[75654]: pgmap v743: 177 pgs: 177 active+clean; 21 MiB data, 102 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.1 MiB/s wr, 29 op/s
Nov 25 23:54:50 compute-0 ceph-mon[75654]: osdmap e67: 3 total, 3 up, 3 in
Nov 25 23:54:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 21 MiB data, 102 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.7 MiB/s wr, 25 op/s
Nov 25 23:54:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 25 23:54:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 25 23:54:50 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 25 23:54:51 compute-0 ceph-mon[75654]: pgmap v745: 177 pgs: 177 active+clean; 21 MiB data, 102 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.7 MiB/s wr, 25 op/s
Nov 25 23:54:51 compute-0 ceph-mon[75654]: osdmap e68: 3 total, 3 up, 3 in
Nov 25 23:54:52 compute-0 podman[256341]: 2025-11-25 23:54:52.345441966 +0000 UTC m=+0.169662541 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 25 23:54:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.4 MiB/s wr, 50 op/s
Nov 25 23:54:53 compute-0 podman[256369]: 2025-11-25 23:54:53.252982733 +0000 UTC m=+0.074177686 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 23:54:53 compute-0 ceph-mon[75654]: pgmap v747: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.4 MiB/s wr, 50 op/s
Nov 25 23:54:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 3.1 MiB/s wr, 24 op/s
Nov 25 23:54:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:54:55 compute-0 ceph-mon[75654]: pgmap v748: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 3.1 MiB/s wr, 24 op/s
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:54:56
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['vms', 'backups', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:54:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Nov 25 23:54:57 compute-0 ceph-mon[75654]: pgmap v749: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 23 op/s
Nov 25 23:54:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 MiB/s wr, 20 op/s
Nov 25 23:54:59 compute-0 ceph-mon[75654]: pgmap v750: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 MiB/s wr, 20 op/s
Nov 25 23:55:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 25 23:55:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.689987) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900690024, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1584, "num_deletes": 251, "total_data_size": 1722541, "memory_usage": 1754272, "flush_reason": "Manual Compaction"}
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900704018, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1679012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14139, "largest_seqno": 15722, "table_properties": {"data_size": 1671642, "index_size": 4381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15020, "raw_average_key_size": 19, "raw_value_size": 1656768, "raw_average_value_size": 2185, "num_data_blocks": 201, "num_entries": 758, "num_filter_entries": 758, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114736, "oldest_key_time": 1764114736, "file_creation_time": 1764114900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 14101 microseconds, and 7832 cpu microseconds.
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.704084) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1679012 bytes OK
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.704109) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.705851) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.705879) EVENT_LOG_v1 {"time_micros": 1764114900705870, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.705906) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1715654, prev total WAL file size 1715654, number of live WAL files 2.
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.706949) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1639KB)], [35(5145KB)]
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900706994, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 6947904, "oldest_snapshot_seqno": -1}
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3529 keys, 5759352 bytes, temperature: kUnknown
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900746972, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 5759352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5732687, "index_size": 16763, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8837, "raw_key_size": 83358, "raw_average_key_size": 23, "raw_value_size": 5666168, "raw_average_value_size": 1605, "num_data_blocks": 722, "num_entries": 3529, "num_filter_entries": 3529, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764114900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.747371) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 5759352 bytes
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.748896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.3 rd, 143.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 5.0 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.4) OK, records in: 4047, records dropped: 518 output_compression: NoCompression
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.748926) EVENT_LOG_v1 {"time_micros": 1764114900748911, "job": 16, "event": "compaction_finished", "compaction_time_micros": 40096, "compaction_time_cpu_micros": 23614, "output_level": 6, "num_output_files": 1, "total_output_size": 5759352, "num_input_records": 4047, "num_output_records": 3529, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900749776, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764114900751663, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.706822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.751763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.751769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.751772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.751773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:55:00 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:55:00.751775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:55:01 compute-0 ceph-mon[75654]: pgmap v751: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:55:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:55:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Nov 25 23:55:02 compute-0 nova_compute[252550]: 2025-11-25 23:55:02.818 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:55:03 compute-0 ceph-mon[75654]: pgmap v752: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Nov 25 23:55:03 compute-0 nova_compute[252550]: 2025-11-25 23:55:03.816 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:55:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:04 compute-0 nova_compute[252550]: 2025-11-25 23:55:04.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:55:04 compute-0 nova_compute[252550]: 2025-11-25 23:55:04.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:55:04 compute-0 nova_compute[252550]: 2025-11-25 23:55:04.955 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:55:04 compute-0 nova_compute[252550]: 2025-11-25 23:55:04.955 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:55:04 compute-0 nova_compute[252550]: 2025-11-25 23:55:04.955 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:55:04 compute-0 nova_compute[252550]: 2025-11-25 23:55:04.956 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 23:55:04 compute-0 nova_compute[252550]: 2025-11-25 23:55:04.956 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:55:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:55:05 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3745899408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:55:05 compute-0 nova_compute[252550]: 2025-11-25 23:55:05.409 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:55:05 compute-0 nova_compute[252550]: 2025-11-25 23:55:05.594 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:55:05 compute-0 nova_compute[252550]: 2025-11-25 23:55:05.595 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5298MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 23:55:05 compute-0 nova_compute[252550]: 2025-11-25 23:55:05.595 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:55:05 compute-0 nova_compute[252550]: 2025-11-25 23:55:05.595 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:55:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:05 compute-0 nova_compute[252550]: 2025-11-25 23:55:05.667 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 23:55:05 compute-0 nova_compute[252550]: 2025-11-25 23:55:05.667 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 23:55:05 compute-0 nova_compute[252550]: 2025-11-25 23:55:05.682 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:55:05 compute-0 ceph-mon[75654]: pgmap v753: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:05 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3745899408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:55:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:55:06 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3783575913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:55:06 compute-0 nova_compute[252550]: 2025-11-25 23:55:06.103 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:55:06 compute-0 nova_compute[252550]: 2025-11-25 23:55:06.109 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:55:06 compute-0 nova_compute[252550]: 2025-11-25 23:55:06.172 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:55:06 compute-0 nova_compute[252550]: 2025-11-25 23:55:06.174 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 23:55:06 compute-0 nova_compute[252550]: 2025-11-25 23:55:06.174 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:55:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v754: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:06 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3783575913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:55:07 compute-0 nova_compute[252550]: 2025-11-25 23:55:07.174 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:55:07 compute-0 nova_compute[252550]: 2025-11-25 23:55:07.175 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 23:55:07 compute-0 nova_compute[252550]: 2025-11-25 23:55:07.175 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 23:55:07 compute-0 nova_compute[252550]: 2025-11-25 23:55:07.197 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 23:55:07 compute-0 nova_compute[252550]: 2025-11-25 23:55:07.198 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:55:07 compute-0 nova_compute[252550]: 2025-11-25 23:55:07.199 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:55:07 compute-0 nova_compute[252550]: 2025-11-25 23:55:07.199 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:55:07 compute-0 nova_compute[252550]: 2025-11-25 23:55:07.201 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:55:07 compute-0 nova_compute[252550]: 2025-11-25 23:55:07.201 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 23:55:07 compute-0 ceph-mon[75654]: pgmap v754: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:07 compute-0 nova_compute[252550]: 2025-11-25 23:55:07.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:55:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:09 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:55:09.605 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:82:13', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '36:f3:66:b7:57:d1'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 23:55:09 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:55:09.606 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 23:55:09 compute-0 ceph-mon[75654]: pgmap v755: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:11 compute-0 ceph-mon[75654]: pgmap v756: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v757: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:13 compute-0 ceph-mon[75654]: pgmap v757: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:15 compute-0 ceph-mon[75654]: pgmap v758: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:16 compute-0 podman[256432]: 2025-11-25 23:55:16.263014791 +0000 UTC m=+0.083078425 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 23:55:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:16 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:55:16.609 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 23:55:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:55:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3709224588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:55:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:55:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3709224588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:55:17 compute-0 ceph-mon[75654]: pgmap v759: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3709224588' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:55:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3709224588' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:55:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v760: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:19 compute-0 ceph-mon[75654]: pgmap v760: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:22 compute-0 ceph-mon[75654]: pgmap v761: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:23 compute-0 podman[256452]: 2025-11-25 23:55:23.312405087 +0000 UTC m=+0.133066162 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 23:55:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 25 23:55:23 compute-0 ceph-mon[75654]: pgmap v762: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 25 23:55:23 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 25 23:55:23 compute-0 podman[256478]: 2025-11-25 23:55:23.437256279 +0000 UTC m=+0.089167208 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 25 23:55:24 compute-0 ceph-mon[75654]: osdmap e69: 3 total, 3 up, 3 in
Nov 25 23:55:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:25 compute-0 ceph-mon[75654]: pgmap v764: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:55:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:25 compute-0 sudo[256496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:25 compute-0 sudo[256496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:25 compute-0 sudo[256496]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:25 compute-0 sudo[256521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:55:25 compute-0 sudo[256521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:25 compute-0 sudo[256521]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:26 compute-0 sudo[256546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:26 compute-0 sudo[256546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:26 compute-0 sudo[256546]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:55:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:55:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:55:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:55:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:55:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:55:26 compute-0 sudo[256571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 25 23:55:26 compute-0 sudo[256571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 65 MiB data, 130 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.4 MiB/s wr, 16 op/s
Nov 25 23:55:26 compute-0 podman[256669]: 2025-11-25 23:55:26.75450209 +0000 UTC m=+0.118510933 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 23:55:26 compute-0 podman[256669]: 2025-11-25 23:55:26.851017772 +0000 UTC m=+0.215026595 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 25 23:55:27 compute-0 ceph-mon[75654]: pgmap v765: 177 pgs: 177 active+clean; 65 MiB data, 130 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.4 MiB/s wr, 16 op/s
Nov 25 23:55:27 compute-0 sudo[256571]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:55:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:55:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:55:27 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:55:27 compute-0 sudo[256802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:27 compute-0 sudo[256802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:27 compute-0 sudo[256802]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:27 compute-0 sudo[256827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:55:27 compute-0 sudo[256827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:27 compute-0 sudo[256827]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:27 compute-0 sudo[256852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:27 compute-0 sudo[256852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:27 compute-0 sudo[256852]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:27 compute-0 sudo[256877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:55:27 compute-0 sudo[256877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:28 compute-0 sudo[256877]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:55:28 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:55:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:55:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:55:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:55:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:55:28 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 3535cbed-4b46-4a93-8c61-82797458b847 does not exist
Nov 25 23:55:28 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 49e16c39-4f57-42b9-8d82-18d4ce3db8d7 does not exist
Nov 25 23:55:28 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 8dc95a44-b238-42ec-a1c9-003c9675db29 does not exist
Nov 25 23:55:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:55:28 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:55:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:55:28 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:55:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:55:28 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:55:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 81 MiB data, 146 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 MiB/s wr, 38 op/s
Nov 25 23:55:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 25 23:55:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 25 23:55:28 compute-0 sudo[256934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:28 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 25 23:55:28 compute-0 sudo[256934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:28 compute-0 sudo[256934]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:55:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:55:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:55:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:55:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:55:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:55:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:55:28 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:55:28 compute-0 ceph-mon[75654]: osdmap e70: 3 total, 3 up, 3 in
Nov 25 23:55:28 compute-0 sudo[256959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:55:28 compute-0 sudo[256959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:28 compute-0 sudo[256959]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:28 compute-0 sudo[256984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:28 compute-0 sudo[256984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:28 compute-0 sudo[256984]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:28 compute-0 sudo[257009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:55:28 compute-0 sudo[257009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:29 compute-0 podman[257074]: 2025-11-25 23:55:29.2447731 +0000 UTC m=+0.069610583 container create 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:55:29 compute-0 systemd[1]: Started libpod-conmon-068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75.scope.
Nov 25 23:55:29 compute-0 podman[257074]: 2025-11-25 23:55:29.219484313 +0000 UTC m=+0.044321856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:55:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:55:29 compute-0 podman[257074]: 2025-11-25 23:55:29.340115362 +0000 UTC m=+0.164952865 container init 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:55:29 compute-0 podman[257074]: 2025-11-25 23:55:29.351448165 +0000 UTC m=+0.176285648 container start 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:55:29 compute-0 podman[257074]: 2025-11-25 23:55:29.355030871 +0000 UTC m=+0.179868364 container attach 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:55:29 compute-0 festive_proskuriakova[257091]: 167 167
Nov 25 23:55:29 compute-0 systemd[1]: libpod-068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75.scope: Deactivated successfully.
Nov 25 23:55:29 compute-0 podman[257074]: 2025-11-25 23:55:29.359388427 +0000 UTC m=+0.184225920 container died 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bfb86f8928dd0530e6f57e5f355013a9259d89e7795247990c5ccf64d07b952-merged.mount: Deactivated successfully.
Nov 25 23:55:29 compute-0 podman[257074]: 2025-11-25 23:55:29.41816725 +0000 UTC m=+0.243004733 container remove 068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_proskuriakova, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:55:29 compute-0 systemd[1]: libpod-conmon-068c62ed7b3e5108ed97d29e2c954ad3e47e62209cde238ec8bf0e90318b2b75.scope: Deactivated successfully.
Nov 25 23:55:29 compute-0 ceph-mon[75654]: pgmap v766: 177 pgs: 177 active+clean; 81 MiB data, 146 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.0 MiB/s wr, 38 op/s
Nov 25 23:55:29 compute-0 podman[257115]: 2025-11-25 23:55:29.680276454 +0000 UTC m=+0.065852753 container create 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:55:29 compute-0 systemd[1]: Started libpod-conmon-8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c.scope.
Nov 25 23:55:29 compute-0 podman[257115]: 2025-11-25 23:55:29.655614705 +0000 UTC m=+0.041191074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:55:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:29 compute-0 podman[257115]: 2025-11-25 23:55:29.777835995 +0000 UTC m=+0.163412324 container init 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 25 23:55:29 compute-0 podman[257115]: 2025-11-25 23:55:29.793023812 +0000 UTC m=+0.178600141 container start 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:55:29 compute-0 podman[257115]: 2025-11-25 23:55:29.79707528 +0000 UTC m=+0.182651569 container attach 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:55:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 25 23:55:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 25 23:55:30 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 25 23:55:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 177 active+clean; 81 MiB data, 146 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 5.6 MiB/s wr, 54 op/s
Nov 25 23:55:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:30 compute-0 sad_morse[257132]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:55:30 compute-0 sad_morse[257132]: --> relative data size: 1.0
Nov 25 23:55:30 compute-0 sad_morse[257132]: --> All data devices are unavailable
Nov 25 23:55:30 compute-0 systemd[1]: libpod-8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c.scope: Deactivated successfully.
Nov 25 23:55:30 compute-0 podman[257115]: 2025-11-25 23:55:30.875755686 +0000 UTC m=+1.261332045 container died 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 23:55:30 compute-0 systemd[1]: libpod-8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c.scope: Consumed 1.036s CPU time.
Nov 25 23:55:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf72c123acbbb7372ccbeb0c4ae38ee06b8b26d09dde7d2180c739bbefab3d82-merged.mount: Deactivated successfully.
Nov 25 23:55:30 compute-0 podman[257115]: 2025-11-25 23:55:30.930778969 +0000 UTC m=+1.316355258 container remove 8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_morse, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 23:55:30 compute-0 systemd[1]: libpod-conmon-8823f82eb19919887e6601406c155fd82fa380bf650feb955ffffdbc2068d96c.scope: Deactivated successfully.
Nov 25 23:55:30 compute-0 sudo[257009]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:31 compute-0 sudo[257175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:31 compute-0 sudo[257175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:31 compute-0 sudo[257175]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:31 compute-0 sudo[257200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:55:31 compute-0 sudo[257200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:31 compute-0 sudo[257200]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:31 compute-0 sudo[257225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:31 compute-0 sudo[257225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:31 compute-0 sudo[257225]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:31 compute-0 sudo[257250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:55:31 compute-0 sudo[257250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:31 compute-0 ceph-mon[75654]: osdmap e71: 3 total, 3 up, 3 in
Nov 25 23:55:31 compute-0 ceph-mon[75654]: pgmap v769: 177 pgs: 177 active+clean; 81 MiB data, 146 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 5.6 MiB/s wr, 54 op/s
Nov 25 23:55:31 compute-0 podman[257315]: 2025-11-25 23:55:31.706165408 +0000 UTC m=+0.062422631 container create dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:55:31 compute-0 systemd[1]: Started libpod-conmon-dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952.scope.
Nov 25 23:55:31 compute-0 podman[257315]: 2025-11-25 23:55:31.68007157 +0000 UTC m=+0.036328844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:55:31 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:55:31 compute-0 podman[257315]: 2025-11-25 23:55:31.798323905 +0000 UTC m=+0.154581168 container init dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 23:55:31 compute-0 podman[257315]: 2025-11-25 23:55:31.808640121 +0000 UTC m=+0.164897304 container start dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:55:31 compute-0 podman[257315]: 2025-11-25 23:55:31.811399945 +0000 UTC m=+0.167657168 container attach dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 23:55:31 compute-0 thirsty_rhodes[257331]: 167 167
Nov 25 23:55:31 compute-0 systemd[1]: libpod-dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952.scope: Deactivated successfully.
Nov 25 23:55:31 compute-0 podman[257315]: 2025-11-25 23:55:31.81607612 +0000 UTC m=+0.172333333 container died dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-610bcf078433a5ff4a6e0128982712f94017d0a3d30e715f8a5a6d7f55664b91-merged.mount: Deactivated successfully.
Nov 25 23:55:31 compute-0 podman[257315]: 2025-11-25 23:55:31.862137993 +0000 UTC m=+0.218395206 container remove dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:55:31 compute-0 systemd[1]: libpod-conmon-dce1411b54eddf69c388253b3a6d3b4ffc80521bb8d15cadd36f0dd1129b0952.scope: Deactivated successfully.
Nov 25 23:55:32 compute-0 podman[257357]: 2025-11-25 23:55:32.11538108 +0000 UTC m=+0.067578320 container create 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:55:32 compute-0 systemd[1]: Started libpod-conmon-489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1.scope.
Nov 25 23:55:32 compute-0 podman[257357]: 2025-11-25 23:55:32.086983939 +0000 UTC m=+0.039181219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:55:32 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07b9f4939c6ce145de44f49a6fa7a738e70026ab9708c252c4bd3f9a80133a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07b9f4939c6ce145de44f49a6fa7a738e70026ab9708c252c4bd3f9a80133a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07b9f4939c6ce145de44f49a6fa7a738e70026ab9708c252c4bd3f9a80133a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07b9f4939c6ce145de44f49a6fa7a738e70026ab9708c252c4bd3f9a80133a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:32 compute-0 podman[257357]: 2025-11-25 23:55:32.222392873 +0000 UTC m=+0.174590163 container init 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:55:32 compute-0 podman[257357]: 2025-11-25 23:55:32.232980937 +0000 UTC m=+0.185178167 container start 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:55:32 compute-0 podman[257357]: 2025-11-25 23:55:32.237373764 +0000 UTC m=+0.189570994 container attach 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:55:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v770: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 14 MiB/s wr, 94 op/s
Nov 25 23:55:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 25 23:55:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 25 23:55:32 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 25 23:55:32 compute-0 competent_bouman[257373]: {
Nov 25 23:55:32 compute-0 competent_bouman[257373]:     "0": [
Nov 25 23:55:32 compute-0 competent_bouman[257373]:         {
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "devices": [
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "/dev/loop3"
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             ],
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_name": "ceph_lv0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_size": "21470642176",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "name": "ceph_lv0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "tags": {
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.cluster_name": "ceph",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.crush_device_class": "",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.encrypted": "0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.osd_id": "0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.type": "block",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.vdo": "0"
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             },
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "type": "block",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "vg_name": "ceph_vg0"
Nov 25 23:55:32 compute-0 competent_bouman[257373]:         }
Nov 25 23:55:32 compute-0 competent_bouman[257373]:     ],
Nov 25 23:55:32 compute-0 competent_bouman[257373]:     "1": [
Nov 25 23:55:32 compute-0 competent_bouman[257373]:         {
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "devices": [
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "/dev/loop4"
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             ],
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_name": "ceph_lv1",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_size": "21470642176",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "name": "ceph_lv1",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "tags": {
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.cluster_name": "ceph",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.crush_device_class": "",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.encrypted": "0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.osd_id": "1",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.type": "block",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.vdo": "0"
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             },
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "type": "block",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "vg_name": "ceph_vg1"
Nov 25 23:55:32 compute-0 competent_bouman[257373]:         }
Nov 25 23:55:32 compute-0 competent_bouman[257373]:     ],
Nov 25 23:55:32 compute-0 competent_bouman[257373]:     "2": [
Nov 25 23:55:32 compute-0 competent_bouman[257373]:         {
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "devices": [
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "/dev/loop5"
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             ],
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_name": "ceph_lv2",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_size": "21470642176",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "name": "ceph_lv2",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "tags": {
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.cluster_name": "ceph",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.crush_device_class": "",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.encrypted": "0",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.osd_id": "2",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.type": "block",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:                 "ceph.vdo": "0"
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             },
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "type": "block",
Nov 25 23:55:32 compute-0 competent_bouman[257373]:             "vg_name": "ceph_vg2"
Nov 25 23:55:32 compute-0 competent_bouman[257373]:         }
Nov 25 23:55:32 compute-0 competent_bouman[257373]:     ]
Nov 25 23:55:32 compute-0 competent_bouman[257373]: }
Nov 25 23:55:33 compute-0 systemd[1]: libpod-489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1.scope: Deactivated successfully.
Nov 25 23:55:33 compute-0 podman[257357]: 2025-11-25 23:55:33.001300517 +0000 UTC m=+0.953497727 container died 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:55:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c07b9f4939c6ce145de44f49a6fa7a738e70026ab9708c252c4bd3f9a80133a5-merged.mount: Deactivated successfully.
Nov 25 23:55:33 compute-0 podman[257357]: 2025-11-25 23:55:33.067293022 +0000 UTC m=+1.019490262 container remove 489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bouman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:55:33 compute-0 systemd[1]: libpod-conmon-489db22fd14a758d2186c553ba0496600e0873362a466f3842d8a279e49453b1.scope: Deactivated successfully.
Nov 25 23:55:33 compute-0 sudo[257250]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:33 compute-0 sudo[257396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:33 compute-0 sudo[257396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:33 compute-0 sudo[257396]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:33 compute-0 sudo[257421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:55:33 compute-0 sudo[257421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:33 compute-0 sudo[257421]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:33 compute-0 sudo[257446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:33 compute-0 sudo[257446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:33 compute-0 sudo[257446]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:33 compute-0 sudo[257471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:55:33 compute-0 sudo[257471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:33 compute-0 ceph-mon[75654]: pgmap v770: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 14 MiB/s wr, 94 op/s
Nov 25 23:55:33 compute-0 ceph-mon[75654]: osdmap e72: 3 total, 3 up, 3 in
Nov 25 23:55:33 compute-0 podman[257540]: 2025-11-25 23:55:33.872585003 +0000 UTC m=+0.050720819 container create cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 25 23:55:33 compute-0 systemd[1]: Started libpod-conmon-cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff.scope.
Nov 25 23:55:33 compute-0 podman[257540]: 2025-11-25 23:55:33.848378885 +0000 UTC m=+0.026514751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:55:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:55:33 compute-0 podman[257540]: 2025-11-25 23:55:33.971394787 +0000 UTC m=+0.149530593 container init cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 25 23:55:33 compute-0 podman[257540]: 2025-11-25 23:55:33.982564656 +0000 UTC m=+0.160700452 container start cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:55:33 compute-0 podman[257540]: 2025-11-25 23:55:33.985923836 +0000 UTC m=+0.164059662 container attach cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 23:55:33 compute-0 tender_colden[257557]: 167 167
Nov 25 23:55:33 compute-0 systemd[1]: libpod-cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff.scope: Deactivated successfully.
Nov 25 23:55:33 compute-0 podman[257540]: 2025-11-25 23:55:33.991089834 +0000 UTC m=+0.169225650 container died cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 23:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ae8a80c5d1fea38673994d8ab86ee5e502347a4581e1a13eb653102f9b5b6a4-merged.mount: Deactivated successfully.
Nov 25 23:55:34 compute-0 podman[257540]: 2025-11-25 23:55:34.029570034 +0000 UTC m=+0.207705820 container remove cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:55:34 compute-0 systemd[1]: libpod-conmon-cc4031b0cafaab30df1f659bb0278c7d26a712726a1caf1ba714960b78895cff.scope: Deactivated successfully.
Nov 25 23:55:34 compute-0 podman[257580]: 2025-11-25 23:55:34.228882598 +0000 UTC m=+0.058471696 container create d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:55:34 compute-0 systemd[1]: Started libpod-conmon-d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1.scope.
Nov 25 23:55:34 compute-0 podman[257580]: 2025-11-25 23:55:34.201088454 +0000 UTC m=+0.030677592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:55:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c26599e0651c0519b63224edd334872ca9ee7144281376dbff40f4ce796a8df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c26599e0651c0519b63224edd334872ca9ee7144281376dbff40f4ce796a8df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c26599e0651c0519b63224edd334872ca9ee7144281376dbff40f4ce796a8df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c26599e0651c0519b63224edd334872ca9ee7144281376dbff40f4ce796a8df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:55:34 compute-0 podman[257580]: 2025-11-25 23:55:34.342086157 +0000 UTC m=+0.171675275 container init d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:55:34 compute-0 podman[257580]: 2025-11-25 23:55:34.359884414 +0000 UTC m=+0.189473492 container start d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:55:34 compute-0 podman[257580]: 2025-11-25 23:55:34.36571261 +0000 UTC m=+0.195301678 container attach d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 25 23:55:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 12 MiB/s wr, 60 op/s
Nov 25 23:55:34 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 25 23:55:34 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 25 23:55:34 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 25 23:55:35 compute-0 sshd-session[257496]: Invalid user sistemas from 45.78.216.132 port 35850
Nov 25 23:55:35 compute-0 amazing_snyder[257596]: {
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "osd_id": 2,
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "type": "bluestore"
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:     },
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "osd_id": 1,
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "type": "bluestore"
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:     },
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "osd_id": 0,
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:         "type": "bluestore"
Nov 25 23:55:35 compute-0 amazing_snyder[257596]:     }
Nov 25 23:55:35 compute-0 amazing_snyder[257596]: }
Nov 25 23:55:35 compute-0 systemd[1]: libpod-d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1.scope: Deactivated successfully.
Nov 25 23:55:35 compute-0 systemd[1]: libpod-d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1.scope: Consumed 1.083s CPU time.
Nov 25 23:55:35 compute-0 podman[257580]: 2025-11-25 23:55:35.445363622 +0000 UTC m=+1.274952720 container died d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 25 23:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c26599e0651c0519b63224edd334872ca9ee7144281376dbff40f4ce796a8df-merged.mount: Deactivated successfully.
Nov 25 23:55:35 compute-0 podman[257580]: 2025-11-25 23:55:35.524995133 +0000 UTC m=+1.354584191 container remove d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_snyder, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:55:35 compute-0 systemd[1]: libpod-conmon-d6eef7ce3986a3efe3ae315470cd586fb212571ab1d517e736f700ab1503f2f1.scope: Deactivated successfully.
Nov 25 23:55:35 compute-0 sudo[257471]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:35 compute-0 ceph-mon[75654]: pgmap v772: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 12 MiB/s wr, 60 op/s
Nov 25 23:55:35 compute-0 ceph-mon[75654]: osdmap e73: 3 total, 3 up, 3 in
Nov 25 23:55:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:55:35 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:55:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:55:35 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:55:35 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev dd1bb234-b4a6-48f0-9f40-badc8287b381 does not exist
Nov 25 23:55:35 compute-0 sshd-session[257496]: Received disconnect from 45.78.216.132 port 35850:11: Bye Bye [preauth]
Nov 25 23:55:35 compute-0 sshd-session[257496]: Disconnected from invalid user sistemas 45.78.216.132 port 35850 [preauth]
Nov 25 23:55:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 25 23:55:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 25 23:55:35 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 25 23:55:35 compute-0 sudo[257641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:55:35 compute-0 sudo[257641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:35 compute-0 sudo[257641]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:35 compute-0 sudo[257666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:55:35 compute-0 sudo[257666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:55:35 compute-0 sudo[257666]: pam_unix(sudo:session): session closed for user root
Nov 25 23:55:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 12 MiB/s wr, 108 op/s
Nov 25 23:55:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:55:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:55:36 compute-0 ceph-mon[75654]: osdmap e74: 3 total, 3 up, 3 in
Nov 25 23:55:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 25 23:55:37 compute-0 ceph-mon[75654]: pgmap v775: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 12 MiB/s wr, 108 op/s
Nov 25 23:55:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 25 23:55:37 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 25 23:55:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v777: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 5.5 KiB/s wr, 89 op/s
Nov 25 23:55:38 compute-0 ceph-mon[75654]: osdmap e75: 3 total, 3 up, 3 in
Nov 25 23:55:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 25 23:55:39 compute-0 ceph-mon[75654]: pgmap v777: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 5.5 KiB/s wr, 89 op/s
Nov 25 23:55:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 25 23:55:39 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 25 23:55:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v779: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 5.5 KiB/s wr, 90 op/s
Nov 25 23:55:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 25 23:55:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 25 23:55:40 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 25 23:55:40 compute-0 ceph-mon[75654]: osdmap e76: 3 total, 3 up, 3 in
Nov 25 23:55:40 compute-0 ceph-mon[75654]: osdmap e77: 3 total, 3 up, 3 in
Nov 25 23:55:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:55:40.765 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:55:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:55:40.766 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:55:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:55:40.767 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:55:41 compute-0 ceph-mon[75654]: pgmap v779: 177 pgs: 177 active+clean; 41 MiB data, 122 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 5.5 KiB/s wr, 90 op/s
Nov 25 23:55:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 17 KiB/s wr, 244 op/s
Nov 25 23:55:43 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 25 23:55:43 compute-0 ceph-mon[75654]: pgmap v781: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 177 KiB/s rd, 17 KiB/s wr, 244 op/s
Nov 25 23:55:43 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 25 23:55:43 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 25 23:55:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v783: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 15 KiB/s wr, 202 op/s
Nov 25 23:55:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 25 23:55:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 25 23:55:44 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 25 23:55:44 compute-0 ceph-mon[75654]: osdmap e78: 3 total, 3 up, 3 in
Nov 25 23:55:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 25 23:55:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 25 23:55:45 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 25 23:55:45 compute-0 ceph-mon[75654]: pgmap v783: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 15 KiB/s wr, 202 op/s
Nov 25 23:55:45 compute-0 ceph-mon[75654]: osdmap e79: 3 total, 3 up, 3 in
Nov 25 23:55:45 compute-0 ceph-mon[75654]: osdmap e80: 3 total, 3 up, 3 in
Nov 25 23:55:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v786: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 17 KiB/s wr, 225 op/s
Nov 25 23:55:47 compute-0 podman[257692]: 2025-11-25 23:55:47.263654786 +0000 UTC m=+0.080068153 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 23:55:47 compute-0 ceph-mon[75654]: pgmap v786: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 17 KiB/s wr, 225 op/s
Nov 25 23:55:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 41 op/s
Nov 25 23:55:49 compute-0 ceph-mon[75654]: pgmap v787: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 41 op/s
Nov 25 23:55:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.2 KiB/s wr, 36 op/s
Nov 25 23:55:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 25 23:55:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 25 23:55:50 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 25 23:55:51 compute-0 ceph-mon[75654]: pgmap v788: 177 pgs: 177 active+clean; 41 MiB data, 127 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.2 KiB/s wr, 36 op/s
Nov 25 23:55:51 compute-0 ceph-mon[75654]: osdmap e81: 3 total, 3 up, 3 in
Nov 25 23:55:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 25 23:55:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 25 23:55:51 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 25 23:55:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 41 MiB data, 131 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 5.8 KiB/s wr, 73 op/s
Nov 25 23:55:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 25 23:55:52 compute-0 ceph-mon[75654]: osdmap e82: 3 total, 3 up, 3 in
Nov 25 23:55:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 25 23:55:52 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 25 23:55:53 compute-0 nova_compute[252550]: 2025-11-25 23:55:53.395 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:55:53 compute-0 nova_compute[252550]: 2025-11-25 23:55:53.396 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:55:53 compute-0 nova_compute[252550]: 2025-11-25 23:55:53.484 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 23:55:53 compute-0 nova_compute[252550]: 2025-11-25 23:55:53.633 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:55:53 compute-0 nova_compute[252550]: 2025-11-25 23:55:53.634 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:55:53 compute-0 nova_compute[252550]: 2025-11-25 23:55:53.646 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 23:55:53 compute-0 nova_compute[252550]: 2025-11-25 23:55:53.647 252558 INFO nova.compute.claims [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Claim successful on node compute-0.ctlplane.example.com
Nov 25 23:55:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 25 23:55:53 compute-0 nova_compute[252550]: 2025-11-25 23:55:53.780 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:55:53 compute-0 ceph-mon[75654]: pgmap v791: 177 pgs: 177 active+clean; 41 MiB data, 131 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 5.8 KiB/s wr, 73 op/s
Nov 25 23:55:53 compute-0 ceph-mon[75654]: osdmap e83: 3 total, 3 up, 3 in
Nov 25 23:55:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 25 23:55:53 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 25 23:55:54 compute-0 podman[257733]: 2025-11-25 23:55:54.216901841 +0000 UTC m=+0.047082852 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 23:55:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:55:54 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613194685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:55:54 compute-0 podman[257732]: 2025-11-25 23:55:54.245134496 +0000 UTC m=+0.076220070 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 23:55:54 compute-0 nova_compute[252550]: 2025-11-25 23:55:54.253 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:55:54 compute-0 nova_compute[252550]: 2025-11-25 23:55:54.260 252558 DEBUG nova.compute.provider_tree [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:55:54 compute-0 nova_compute[252550]: 2025-11-25 23:55:54.275 252558 DEBUG nova.scheduler.client.report [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:55:54 compute-0 nova_compute[252550]: 2025-11-25 23:55:54.294 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:55:54 compute-0 nova_compute[252550]: 2025-11-25 23:55:54.295 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 23:55:54 compute-0 nova_compute[252550]: 2025-11-25 23:55:54.334 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 23:55:54 compute-0 nova_compute[252550]: 2025-11-25 23:55:54.335 252558 DEBUG nova.network.neutron [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 23:55:54 compute-0 nova_compute[252550]: 2025-11-25 23:55:54.391 252558 INFO nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 23:55:54 compute-0 nova_compute[252550]: 2025-11-25 23:55:54.484 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 23:55:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 41 MiB data, 131 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 6.2 KiB/s wr, 63 op/s
Nov 25 23:55:54 compute-0 nova_compute[252550]: 2025-11-25 23:55:54.664 252558 INFO nova.virt.block_device [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Booting with volume 6a6b9d67-6cf8-4dcc-abf1-e7df17195818 at /dev/vda
Nov 25 23:55:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 25 23:55:54 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 25 23:55:54 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 25 23:55:54 compute-0 ceph-mon[75654]: osdmap e84: 3 total, 3 up, 3 in
Nov 25 23:55:54 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2613194685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.036 252558 DEBUG os_brick.utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.037 252558 INFO oslo.privsep.daemon [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpj8bitk30/privsep.sock']
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.370 252558 DEBUG nova.network.neutron [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.370 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 23:55:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.761 252558 INFO oslo.privsep.daemon [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Spawned new privsep daemon via rootwrap
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.639 257781 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.643 257781 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.644 257781 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.645 257781 INFO oslo.privsep.daemon [-] privsep daemon running as pid 257781
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.767 257781 DEBUG oslo.privsep.daemon [-] privsep: reply[4996de3e-0789-477d-8653-1710a7be7b56]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 23:55:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 25 23:55:55 compute-0 ceph-mon[75654]: pgmap v794: 177 pgs: 177 active+clean; 41 MiB data, 131 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 6.2 KiB/s wr, 63 op/s
Nov 25 23:55:55 compute-0 ceph-mon[75654]: osdmap e85: 3 total, 3 up, 3 in
Nov 25 23:55:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 25 23:55:55 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.885 257781 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.899 257781 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.899 257781 DEBUG oslo.privsep.daemon [-] privsep: reply[859ca9d5-fb4b-40ba-827d-c8bd6d70cb84]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.901 257781 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.911 257781 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.911 257781 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb366ec-5c72-4bf5-a2d7-ce2f4753e53d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:eb1ba11079b3', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.914 257781 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.926 257781 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.926 257781 DEBUG oslo.privsep.daemon [-] privsep: reply[502dc846-31c8-421e-a921-27bc260a094b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.928 257781 DEBUG oslo.privsep.daemon [-] privsep: reply[8546ee9e-d96f-46da-a89a-80b26c06591c]: (4, '99edd01f-cb88-4b88-a56d-15f374f9d1d0') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.929 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.952 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.957 252558 DEBUG os_brick.initiator.connectors.lightos [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.958 252558 DEBUG os_brick.initiator.connectors.lightos [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.958 252558 DEBUG os_brick.initiator.connectors.lightos [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.959 252558 DEBUG os_brick.utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] <== get_connector_properties: return (922ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:eb1ba11079b3', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '99edd01f-cb88-4b88-a56d-15f374f9d1d0', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 25 23:55:55 compute-0 nova_compute[252550]: 2025-11-25 23:55:55.960 252558 DEBUG nova.virt.block_device [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Updating existing volume attachment record: 7875ce81-3fea-4dc5-9323-881b03756e90 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:55:56
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'backups', '.mgr', 'images', 'cephfs.cephfs.meta', 'vms']
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:55:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v797: 177 pgs: 177 active+clean; 41 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 13 KiB/s wr, 91 op/s
Nov 25 23:55:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 23:55:56 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1576037613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 23:55:56 compute-0 ceph-mon[75654]: osdmap e86: 3 total, 3 up, 3 in
Nov 25 23:55:56 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/1576037613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.312 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.315 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.316 252558 INFO nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Creating image(s)
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.317 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.318 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Ensure instance console log exists: /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.318 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.319 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.319 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.323 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '7875ce81-3fea-4dc5-9323-881b03756e90', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6a6b9d67-6cf8-4dcc-abf1-e7df17195818', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6a6b9d67-6cf8-4dcc-abf1-e7df17195818', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91', 'attached_at': '', 'detached_at': '', 'volume_id': '6a6b9d67-6cf8-4dcc-abf1-e7df17195818', 'serial': '6a6b9d67-6cf8-4dcc-abf1-e7df17195818'}, 'boot_index': 0, 'delete_on_termination': True, 'mount_device': '/dev/vda', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.330 252558 WARNING nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.338 252558 DEBUG nova.virt.libvirt.host [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.339 252558 DEBUG nova.virt.libvirt.host [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.342 252558 DEBUG nova.virt.libvirt.host [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.343 252558 DEBUG nova.virt.libvirt.host [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.344 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.344 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T23:54:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='897d55b7-f73b-41fe-b70f-d9aa95d4456d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.345 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.346 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.346 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.347 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.347 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.347 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.348 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.348 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.349 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.350 252558 DEBUG nova.virt.hardware [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.390 252558 DEBUG nova.storage.rbd_utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.398 252558 DEBUG nova.privsep.utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.398 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:55:57 compute-0 ceph-mon[75654]: pgmap v797: 177 pgs: 177 active+clean; 41 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 13 KiB/s wr, 91 op/s
Nov 25 23:55:57 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 23:55:57 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1342532240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.874 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.876 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.876 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.877 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:55:57 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 25 23:55:57 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.946 252558 DEBUG nova.objects.instance [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lazy-loading 'pci_devices' on Instance uuid bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 23:55:57 compute-0 nova_compute[252550]: 2025-11-25 23:55:57.959 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] End _get_guest_xml xml=<domain type="kvm">
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <uuid>bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91</uuid>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <name>instance-00000001</name>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <memory>131072</memory>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <vcpu>1</vcpu>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <metadata>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <nova:name>instance-depend-image</nova:name>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <nova:creationTime>2025-11-25 23:55:57</nova:creationTime>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <nova:flavor name="m1.nano">
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <nova:memory>128</nova:memory>
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <nova:disk>1</nova:disk>
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <nova:swap>0</nova:swap>
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <nova:vcpus>1</nova:vcpus>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       </nova:flavor>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <nova:owner>
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <nova:user uuid="210f8faea4e1416ab82c35b428209415">tempest-ImageDependencyTests-795400484-project-member</nova:user>
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <nova:project uuid="cda2ac0afb334f238d6d956454314f3d">tempest-ImageDependencyTests-795400484</nova:project>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       </nova:owner>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <nova:ports/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     </nova:instance>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   </metadata>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <sysinfo type="smbios">
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <system>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <entry name="manufacturer">RDO</entry>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <entry name="product">OpenStack Compute</entry>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <entry name="serial">bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91</entry>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <entry name="uuid">bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91</entry>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <entry name="family">Virtual Machine</entry>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     </system>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   </sysinfo>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <os>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <boot dev="hd"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <smbios mode="sysinfo"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   </os>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <features>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <acpi/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <apic/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <vmcoreinfo/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   </features>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <clock offset="utc">
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <timer name="hpet" present="no"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   </clock>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <cpu mode="host-model" match="exact">
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   </cpu>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   <devices>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <disk type="network" device="cdrom">
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <driver type="raw" cache="none"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <source protocol="rbd" name="vms/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config">
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <host name="192.168.122.100" port="6789"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       </source>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <auth username="openstack">
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <secret type="ceph" uuid="101922db-575f-58e2-980f-928050464f69"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       </auth>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <target dev="sda" bus="sata"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     </disk>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <disk type="network" device="disk">
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <source protocol="rbd" name="volumes/volume-6a6b9d67-6cf8-4dcc-abf1-e7df17195818">
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <host name="192.168.122.100" port="6789"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       </source>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <auth username="openstack">
Nov 25 23:55:57 compute-0 nova_compute[252550]:         <secret type="ceph" uuid="101922db-575f-58e2-980f-928050464f69"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       </auth>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <target dev="vda" bus="virtio"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <serial>6a6b9d67-6cf8-4dcc-abf1-e7df17195818</serial>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     </disk>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <serial type="pty">
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <log file="/var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/console.log" append="off"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     </serial>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <video>
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <model type="virtio"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     </video>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <input type="tablet" bus="usb"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <rng model="virtio">
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <backend model="random">/dev/urandom</backend>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     </rng>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <controller type="usb" index="0"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     <memballoon model="virtio">
Nov 25 23:55:57 compute-0 nova_compute[252550]:       <stats period="10"/>
Nov 25 23:55:57 compute-0 nova_compute[252550]:     </memballoon>
Nov 25 23:55:57 compute-0 nova_compute[252550]:   </devices>
Nov 25 23:55:57 compute-0 nova_compute[252550]: </domain>
Nov 25 23:55:57 compute-0 nova_compute[252550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 23:55:58 compute-0 nova_compute[252550]: 2025-11-25 23:55:58.017 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 23:55:58 compute-0 nova_compute[252550]: 2025-11-25 23:55:58.017 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 23:55:58 compute-0 nova_compute[252550]: 2025-11-25 23:55:58.018 252558 INFO nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Using config drive
Nov 25 23:55:58 compute-0 nova_compute[252550]: 2025-11-25 23:55:58.051 252558 DEBUG nova.storage.rbd_utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 23:55:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v798: 177 pgs: 177 active+clean; 41 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 15 KiB/s wr, 112 op/s
Nov 25 23:55:58 compute-0 nova_compute[252550]: 2025-11-25 23:55:58.634 252558 INFO nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Creating config drive at /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config
Nov 25 23:55:58 compute-0 nova_compute[252550]: 2025-11-25 23:55:58.643 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzmixopyw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:55:58 compute-0 nova_compute[252550]: 2025-11-25 23:55:58.794 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzmixopyw" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:55:58 compute-0 nova_compute[252550]: 2025-11-25 23:55:58.818 252558 DEBUG nova.storage.rbd_utils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 23:55:58 compute-0 nova_compute[252550]: 2025-11-25 23:55:58.821 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:55:58 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1342532240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 23:55:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 25 23:55:59 compute-0 ceph-mon[75654]: pgmap v798: 177 pgs: 177 active+clean; 41 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 15 KiB/s wr, 112 op/s
Nov 25 23:55:59 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 25 23:55:59 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 25 23:56:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v800: 177 pgs: 177 active+clean; 41 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 14 KiB/s wr, 107 op/s
Nov 25 23:56:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 25 23:56:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 25 23:56:00 compute-0 ceph-mon[75654]: osdmap e87: 3 total, 3 up, 3 in
Nov 25 23:56:00 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.031 252558 DEBUG oslo_concurrency.processutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.032 252558 INFO nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Deleting local config drive /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91/disk.config because it was imported into RBD.
Nov 25 23:56:01 compute-0 systemd-machined[213892]: New machine qemu-1-instance-00000001.
Nov 25 23:56:01 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.660 252558 DEBUG nova.virt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Emitting event <LifecycleEvent: 1764114961.6597042, bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.661 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] VM Resumed (Lifecycle Event)
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.665 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.666 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.671 252558 INFO nova.virt.libvirt.driver [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance spawned successfully.
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.672 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.714 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.721 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.776 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.776 252558 DEBUG nova.virt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Emitting event <LifecycleEvent: 1764114961.6651616, bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.777 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] VM Started (Lifecycle Event)
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.796 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000667638841407827 of space, bias 1.0, pg target 0.2002916524223481 quantized to 32 (current 32)
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:56:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.807 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.807 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.808 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.808 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.809 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.810 252558 DEBUG nova.virt.libvirt.driver [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.816 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.855 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 23:56:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 25 23:56:01 compute-0 ceph-mon[75654]: pgmap v800: 177 pgs: 177 active+clean; 41 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 14 KiB/s wr, 107 op/s
Nov 25 23:56:01 compute-0 ceph-mon[75654]: osdmap e88: 3 total, 3 up, 3 in
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.880 252558 INFO nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Took 4.57 seconds to spawn the instance on the hypervisor.
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.883 252558 DEBUG nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 23:56:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 25 23:56:01 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.971 252558 INFO nova.compute.manager [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Took 8.39 seconds to build instance.
Nov 25 23:56:01 compute-0 nova_compute[252550]: 2025-11-25 23:56:01.993 252558 DEBUG oslo_concurrency.lockutils [None req-4a090474-38c1-4fc0-b577-a646cf8b4cfb 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v803: 177 pgs: 177 active+clean; 42 MiB data, 166 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 37 KiB/s wr, 184 op/s
Nov 25 23:56:02 compute-0 nova_compute[252550]: 2025-11-25 23:56:02.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:02 compute-0 nova_compute[252550]: 2025-11-25 23:56:02.824 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 23:56:02 compute-0 nova_compute[252550]: 2025-11-25 23:56:02.854 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 23:56:02 compute-0 nova_compute[252550]: 2025-11-25 23:56:02.855 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:02 compute-0 nova_compute[252550]: 2025-11-25 23:56:02.855 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 23:56:02 compute-0 nova_compute[252550]: 2025-11-25 23:56:02.874 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 25 23:56:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 25 23:56:02 compute-0 ceph-mon[75654]: osdmap e89: 3 total, 3 up, 3 in
Nov 25 23:56:02 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 25 23:56:03 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 25 23:56:03 compute-0 ceph-mon[75654]: pgmap v803: 177 pgs: 177 active+clean; 42 MiB data, 166 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 37 KiB/s wr, 184 op/s
Nov 25 23:56:03 compute-0 ceph-mon[75654]: osdmap e90: 3 total, 3 up, 3 in
Nov 25 23:56:03 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 25 23:56:03 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 25 23:56:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v806: 177 pgs: 177 active+clean; 42 MiB data, 166 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 48 KiB/s wr, 206 op/s
Nov 25 23:56:04 compute-0 nova_compute[252550]: 2025-11-25 23:56:04.883 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 25 23:56:04 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 25 23:56:04 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 25 23:56:04 compute-0 ceph-mon[75654]: osdmap e91: 3 total, 3 up, 3 in
Nov 25 23:56:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 25 23:56:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 25 23:56:05 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 25 23:56:05 compute-0 nova_compute[252550]: 2025-11-25 23:56:05.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:05 compute-0 ceph-mon[75654]: pgmap v806: 177 pgs: 177 active+clean; 42 MiB data, 166 MiB used, 60 GiB / 60 GiB avail; 149 KiB/s rd, 48 KiB/s wr, 206 op/s
Nov 25 23:56:05 compute-0 ceph-mon[75654]: osdmap e92: 3 total, 3 up, 3 in
Nov 25 23:56:05 compute-0 ceph-mon[75654]: osdmap e93: 3 total, 3 up, 3 in
Nov 25 23:56:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v809: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 8.0 KiB/s wr, 190 op/s
Nov 25 23:56:06 compute-0 nova_compute[252550]: 2025-11-25 23:56:06.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:06 compute-0 nova_compute[252550]: 2025-11-25 23:56:06.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:06 compute-0 nova_compute[252550]: 2025-11-25 23:56:06.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:06 compute-0 nova_compute[252550]: 2025-11-25 23:56:06.823 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 23:56:06 compute-0 nova_compute[252550]: 2025-11-25 23:56:06.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:06 compute-0 nova_compute[252550]: 2025-11-25 23:56:06.855 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:06 compute-0 nova_compute[252550]: 2025-11-25 23:56:06.855 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:06 compute-0 nova_compute[252550]: 2025-11-25 23:56:06.856 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:06 compute-0 nova_compute[252550]: 2025-11-25 23:56:06.856 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 23:56:06 compute-0 nova_compute[252550]: 2025-11-25 23:56:06.856 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:56:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 25 23:56:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 25 23:56:06 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 25 23:56:07 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:56:07 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650025285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:56:07 compute-0 nova_compute[252550]: 2025-11-25 23:56:07.341 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:56:07 compute-0 nova_compute[252550]: 2025-11-25 23:56:07.420 252558 DEBUG nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 23:56:07 compute-0 nova_compute[252550]: 2025-11-25 23:56:07.421 252558 DEBUG nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 25 23:56:07 compute-0 nova_compute[252550]: 2025-11-25 23:56:07.664 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:56:07 compute-0 nova_compute[252550]: 2025-11-25 23:56:07.666 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5220MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 23:56:07 compute-0 nova_compute[252550]: 2025-11-25 23:56:07.666 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:07 compute-0 nova_compute[252550]: 2025-11-25 23:56:07.667 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:07 compute-0 ceph-mon[75654]: pgmap v809: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 8.0 KiB/s wr, 190 op/s
Nov 25 23:56:07 compute-0 ceph-mon[75654]: osdmap e94: 3 total, 3 up, 3 in
Nov 25 23:56:07 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3650025285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:56:07 compute-0 nova_compute[252550]: 2025-11-25 23:56:07.964 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Instance bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 23:56:07 compute-0 nova_compute[252550]: 2025-11-25 23:56:07.965 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 23:56:07 compute-0 nova_compute[252550]: 2025-11-25 23:56:07.965 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.026 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing inventories for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.118 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Updating ProviderTree inventory for provider 08547965-b35f-4b7b-95d8-902f06aa011c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.119 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Updating inventory in ProviderTree for provider 08547965-b35f-4b7b-95d8-902f06aa011c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.151 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing aggregate associations for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.179 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing trait associations for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.214 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:56:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v811: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 13 KiB/s wr, 289 op/s
Nov 25 23:56:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:56:08 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844459817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.644 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.650 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.670 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.711 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 23:56:08 compute-0 nova_compute[252550]: 2025-11-25 23:56:08.711 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 25 23:56:08 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/844459817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:56:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 25 23:56:08 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 25 23:56:09 compute-0 nova_compute[252550]: 2025-11-25 23:56:09.711 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:09 compute-0 nova_compute[252550]: 2025-11-25 23:56:09.712 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 23:56:09 compute-0 nova_compute[252550]: 2025-11-25 23:56:09.712 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 23:56:09 compute-0 ceph-mon[75654]: pgmap v811: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 218 KiB/s rd, 13 KiB/s wr, 289 op/s
Nov 25 23:56:09 compute-0 ceph-mon[75654]: osdmap e95: 3 total, 3 up, 3 in
Nov 25 23:56:10 compute-0 nova_compute[252550]: 2025-11-25 23:56:10.233 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 23:56:10 compute-0 nova_compute[252550]: 2025-11-25 23:56:10.233 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquired lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 23:56:10 compute-0 nova_compute[252550]: 2025-11-25 23:56:10.234 252558 DEBUG nova.network.neutron [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 23:56:10 compute-0 nova_compute[252550]: 2025-11-25 23:56:10.234 252558 DEBUG nova.objects.instance [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 23:56:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v813: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 11 KiB/s wr, 238 op/s
Nov 25 23:56:10 compute-0 nova_compute[252550]: 2025-11-25 23:56:10.629 252558 DEBUG nova.network.neutron [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 23:56:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 25 23:56:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 25 23:56:10 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 25 23:56:10 compute-0 nova_compute[252550]: 2025-11-25 23:56:10.961 252558 DEBUG nova.network.neutron [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 23:56:10 compute-0 nova_compute[252550]: 2025-11-25 23:56:10.983 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Releasing lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 23:56:10 compute-0 nova_compute[252550]: 2025-11-25 23:56:10.983 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 23:56:10 compute-0 nova_compute[252550]: 2025-11-25 23:56:10.983 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:10 compute-0 nova_compute[252550]: 2025-11-25 23:56:10.984 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:56:11 compute-0 ceph-mon[75654]: pgmap v813: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 11 KiB/s wr, 238 op/s
Nov 25 23:56:11 compute-0 ceph-mon[75654]: osdmap e96: 3 total, 3 up, 3 in
Nov 25 23:56:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v815: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 9.7 KiB/s wr, 178 op/s
Nov 25 23:56:12 compute-0 nova_compute[252550]: 2025-11-25 23:56:12.613 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:12 compute-0 nova_compute[252550]: 2025-11-25 23:56:12.614 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:12 compute-0 nova_compute[252550]: 2025-11-25 23:56:12.629 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 23:56:12 compute-0 nova_compute[252550]: 2025-11-25 23:56:12.699 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:12 compute-0 nova_compute[252550]: 2025-11-25 23:56:12.699 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:12 compute-0 nova_compute[252550]: 2025-11-25 23:56:12.724 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 23:56:12 compute-0 nova_compute[252550]: 2025-11-25 23:56:12.724 252558 INFO nova.compute.claims [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Claim successful on node compute-0.ctlplane.example.com
Nov 25 23:56:12 compute-0 nova_compute[252550]: 2025-11-25 23:56:12.830 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:56:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:56:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2865924001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.267 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.274 252558 DEBUG nova.compute.provider_tree [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.292 252558 DEBUG nova.scheduler.client.report [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.321 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.324 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.397 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.398 252558 DEBUG nova.network.neutron [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.428 252558 INFO nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.451 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.564 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.565 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.566 252558 INFO nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Creating image(s)
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.601 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.636 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.669 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 23:56:13 compute-0 ceph-mon[75654]: pgmap v815: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 9.7 KiB/s wr, 178 op/s
Nov 25 23:56:13 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2865924001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.674 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "6aa298c67176a6f202556fea602ab4a4483a8f4b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.676 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "6aa298c67176a6f202556fea602ab4a4483a8f4b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.969 252558 DEBUG nova.network.neutron [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 25 23:56:13 compute-0 nova_compute[252550]: 2025-11-25 23:56:13.969 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.032 252558 DEBUG nova.virt.libvirt.imagebackend [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Image locations are: [{'url': 'rbd://101922db-575f-58e2-980f-928050464f69/images/16d8485c-e81e-455c-b234-ffc2513a8236/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://101922db-575f-58e2-980f-928050464f69/images/16d8485c-e81e-455c-b234-ffc2513a8236/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.103 252558 DEBUG nova.virt.libvirt.imagebackend [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Selected location: {'url': 'rbd://101922db-575f-58e2-980f-928050464f69/images/16d8485c-e81e-455c-b234-ffc2513a8236/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.104 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] cloning images/16d8485c-e81e-455c-b234-ffc2513a8236@snap to None/861debf8-73c8-45fe-92d9-fbfa772d34eb_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.238 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "6aa298c67176a6f202556fea602ab4a4483a8f4b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:14 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:56:14.242 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:82:13', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '36:f3:66:b7:57:d1'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 23:56:14 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:56:14.243 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.406 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] resizing rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.502 252558 DEBUG nova.objects.instance [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lazy-loading 'migration_context' on Instance uuid 861debf8-73c8-45fe-92d9-fbfa772d34eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.515 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.516 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Ensure instance console log exists: /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.516 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.517 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.517 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.520 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='df9355b10df7ff91027aeb7f3322435e',container_format='bare',created_at=2025-11-25T23:56:10Z,direct_url=<?>,disk_format='raw',id=16d8485c-e81e-455c-b234-ffc2513a8236,min_disk=0,min_ram=0,name='tempest-image-dependency-test-319602011',owner='cda2ac0afb334f238d6d956454314f3d',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T23:56:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encryption_format': None, 'boot_index': 0, 'encrypted': False, 'device_name': '/dev/vda', 'guest_format': None, 'encryption_secret_uuid': None, 'image_id': '16d8485c-e81e-455c-b234-ffc2513a8236'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.525 252558 WARNING nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.530 252558 DEBUG nova.virt.libvirt.host [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.531 252558 DEBUG nova.virt.libvirt.host [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.534 252558 DEBUG nova.virt.libvirt.host [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.535 252558 DEBUG nova.virt.libvirt.host [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.536 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.536 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T23:54:39Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='897d55b7-f73b-41fe-b70f-d9aa95d4456d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='df9355b10df7ff91027aeb7f3322435e',container_format='bare',created_at=2025-11-25T23:56:10Z,direct_url=<?>,disk_format='raw',id=16d8485c-e81e-455c-b234-ffc2513a8236,min_disk=0,min_ram=0,name='tempest-image-dependency-test-319602011',owner='cda2ac0afb334f238d6d956454314f3d',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-11-25T23:56:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.537 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.537 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.538 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.538 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.538 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.539 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.539 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.540 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.540 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.541 252558 DEBUG nova.virt.hardware [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 23:56:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v816: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 7.6 KiB/s wr, 141 op/s
Nov 25 23:56:14 compute-0 nova_compute[252550]: 2025-11-25 23:56:14.545 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:56:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 23:56:15 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107091497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 23:56:15 compute-0 nova_compute[252550]: 2025-11-25 23:56:15.050 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:56:15 compute-0 nova_compute[252550]: 2025-11-25 23:56:15.073 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 23:56:15 compute-0 nova_compute[252550]: 2025-11-25 23:56:15.077 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:56:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 25 23:56:15 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2135860034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 23:56:15 compute-0 nova_compute[252550]: 2025-11-25 23:56:15.526 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:56:15 compute-0 nova_compute[252550]: 2025-11-25 23:56:15.528 252558 DEBUG nova.objects.instance [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lazy-loading 'pci_devices' on Instance uuid 861debf8-73c8-45fe-92d9-fbfa772d34eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 23:56:15 compute-0 nova_compute[252550]: 2025-11-25 23:56:15.590 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] End _get_guest_xml xml=<domain type="kvm">
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <uuid>861debf8-73c8-45fe-92d9-fbfa772d34eb</uuid>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <name>instance-00000002</name>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <memory>131072</memory>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <vcpu>1</vcpu>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <metadata>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <nova:name>instance-depend-image</nova:name>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <nova:creationTime>2025-11-25 23:56:14</nova:creationTime>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <nova:flavor name="m1.nano">
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <nova:memory>128</nova:memory>
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <nova:disk>1</nova:disk>
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <nova:swap>0</nova:swap>
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <nova:vcpus>1</nova:vcpus>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       </nova:flavor>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <nova:owner>
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <nova:user uuid="210f8faea4e1416ab82c35b428209415">tempest-ImageDependencyTests-795400484-project-member</nova:user>
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <nova:project uuid="cda2ac0afb334f238d6d956454314f3d">tempest-ImageDependencyTests-795400484</nova:project>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       </nova:owner>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <nova:root type="image" uuid="16d8485c-e81e-455c-b234-ffc2513a8236"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <nova:ports/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     </nova:instance>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   </metadata>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <sysinfo type="smbios">
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <system>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <entry name="manufacturer">RDO</entry>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <entry name="product">OpenStack Compute</entry>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <entry name="serial">861debf8-73c8-45fe-92d9-fbfa772d34eb</entry>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <entry name="uuid">861debf8-73c8-45fe-92d9-fbfa772d34eb</entry>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <entry name="family">Virtual Machine</entry>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     </system>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   </sysinfo>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <os>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <boot dev="hd"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <smbios mode="sysinfo"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   </os>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <features>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <acpi/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <apic/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <vmcoreinfo/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   </features>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <clock offset="utc">
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <timer name="hpet" present="no"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   </clock>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <cpu mode="host-model" match="exact">
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   </cpu>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   <devices>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <disk type="network" device="disk">
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <driver type="raw" cache="none"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <source protocol="rbd" name="vms/861debf8-73c8-45fe-92d9-fbfa772d34eb_disk">
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <host name="192.168.122.100" port="6789"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       </source>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <auth username="openstack">
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <secret type="ceph" uuid="101922db-575f-58e2-980f-928050464f69"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       </auth>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <target dev="vda" bus="virtio"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     </disk>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <disk type="network" device="cdrom">
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <driver type="raw" cache="none"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <source protocol="rbd" name="vms/861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config">
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <host name="192.168.122.100" port="6789"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       </source>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <auth username="openstack">
Nov 25 23:56:15 compute-0 nova_compute[252550]:         <secret type="ceph" uuid="101922db-575f-58e2-980f-928050464f69"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       </auth>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <target dev="sda" bus="sata"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     </disk>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <serial type="pty">
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <log file="/var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/console.log" append="off"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     </serial>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <video>
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <model type="virtio"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     </video>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <input type="tablet" bus="usb"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <rng model="virtio">
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <backend model="random">/dev/urandom</backend>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     </rng>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <controller type="usb" index="0"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     <memballoon model="virtio">
Nov 25 23:56:15 compute-0 nova_compute[252550]:       <stats period="10"/>
Nov 25 23:56:15 compute-0 nova_compute[252550]:     </memballoon>
Nov 25 23:56:15 compute-0 nova_compute[252550]:   </devices>
Nov 25 23:56:15 compute-0 nova_compute[252550]: </domain>
Nov 25 23:56:15 compute-0 nova_compute[252550]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 23:56:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 25 23:56:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 25 23:56:15 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 25 23:56:15 compute-0 ceph-mon[75654]: pgmap v816: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 7.6 KiB/s wr, 141 op/s
Nov 25 23:56:15 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2107091497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 23:56:15 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2135860034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 23:56:15 compute-0 ceph-mon[75654]: osdmap e97: 3 total, 3 up, 3 in
Nov 25 23:56:15 compute-0 nova_compute[252550]: 2025-11-25 23:56:15.768 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 23:56:15 compute-0 nova_compute[252550]: 2025-11-25 23:56:15.768 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 23:56:15 compute-0 nova_compute[252550]: 2025-11-25 23:56:15.769 252558 INFO nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Using config drive
Nov 25 23:56:15 compute-0 nova_compute[252550]: 2025-11-25 23:56:15.806 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 23:56:16 compute-0 nova_compute[252550]: 2025-11-25 23:56:16.027 252558 INFO nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Creating config drive at /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config
Nov 25 23:56:16 compute-0 nova_compute[252550]: 2025-11-25 23:56:16.036 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpueh_vrze execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:56:16 compute-0 nova_compute[252550]: 2025-11-25 23:56:16.177 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpueh_vrze" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:56:16 compute-0 nova_compute[252550]: 2025-11-25 23:56:16.215 252558 DEBUG nova.storage.rbd_utils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] rbd image 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 23:56:16 compute-0 nova_compute[252550]: 2025-11-25 23:56:16.220 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:56:16 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:56:16.245 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 23:56:16 compute-0 nova_compute[252550]: 2025-11-25 23:56:16.422 252558 DEBUG oslo_concurrency.processutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config 861debf8-73c8-45fe-92d9-fbfa772d34eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:56:16 compute-0 nova_compute[252550]: 2025-11-25 23:56:16.423 252558 INFO nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Deleting local config drive /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb/disk.config because it was imported into RBD.
Nov 25 23:56:16 compute-0 systemd-machined[213892]: New machine qemu-2-instance-00000002.
Nov 25 23:56:16 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 25 23:56:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v818: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 5.3 KiB/s wr, 113 op/s
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.075 252558 DEBUG nova.virt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Emitting event <LifecycleEvent: 1764114977.0745485, 861debf8-73c8-45fe-92d9-fbfa772d34eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.076 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] VM Resumed (Lifecycle Event)
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.079 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.079 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.082 252558 INFO nova.virt.libvirt.driver [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance spawned successfully.
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.083 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.306 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.312 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.464 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.465 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.466 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.467 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.468 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.469 252558 DEBUG nova.virt.libvirt.driver [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.489 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.490 252558 DEBUG nova.virt.driver [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] Emitting event <LifecycleEvent: 1764114977.0760424, 861debf8-73c8-45fe-92d9-fbfa772d34eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.491 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] VM Started (Lifecycle Event)
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.658 252558 INFO nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Took 4.09 seconds to spawn the instance on the hypervisor.
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.659 252558 DEBUG nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.676 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.678 252558 DEBUG nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 23:56:17 compute-0 ceph-mon[75654]: pgmap v818: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 5.3 KiB/s wr, 113 op/s
Nov 25 23:56:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:56:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100619018' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:56:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:56:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100619018' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.751 252558 INFO nova.compute.manager [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Took 5.07 seconds to build instance.
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.786 252558 INFO nova.compute.manager [None req-7ba79be2-f198-460b-a8f9-9fcaa03db1ae - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 23:56:17 compute-0 nova_compute[252550]: 2025-11-25 23:56:17.911 252558 DEBUG oslo_concurrency.lockutils [None req-d3353fa1-4c6b-4c54-93c0-c06f5e44d3b4 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:18 compute-0 podman[258405]: 2025-11-25 23:56:18.294077359 +0000 UTC m=+0.111400281 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 23:56:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v819: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 6.0 KiB/s wr, 116 op/s
Nov 25 23:56:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/4100619018' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:56:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/4100619018' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:56:19 compute-0 nova_compute[252550]: 2025-11-25 23:56:19.590 252558 DEBUG nova.compute.manager [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 23:56:19 compute-0 ceph-mon[75654]: pgmap v819: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 6.0 KiB/s wr, 116 op/s
Nov 25 23:56:19 compute-0 nova_compute[252550]: 2025-11-25 23:56:19.812 252558 INFO nova.compute.manager [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] instance snapshotting
Nov 25 23:56:20 compute-0 nova_compute[252550]: 2025-11-25 23:56:20.542 252558 INFO nova.virt.libvirt.driver [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Beginning live snapshot process
Nov 25 23:56:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v820: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.9 KiB/s wr, 94 op/s
Nov 25 23:56:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:20 compute-0 nova_compute[252550]: 2025-11-25 23:56:20.722 252558 DEBUG nova.storage.rbd_utils [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] creating snapshot(b26e7a6e5ea0472a9476b66aee2cf159) on rbd image(861debf8-73c8-45fe-92d9-fbfa772d34eb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 23:56:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 25 23:56:21 compute-0 ceph-mon[75654]: pgmap v820: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.9 KiB/s wr, 94 op/s
Nov 25 23:56:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 25 23:56:21 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 25 23:56:21 compute-0 nova_compute[252550]: 2025-11-25 23:56:21.820 252558 DEBUG nova.storage.rbd_utils [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] cloning vms/861debf8-73c8-45fe-92d9-fbfa772d34eb_disk@b26e7a6e5ea0472a9476b66aee2cf159 to images/ae4d3bd2-86c2-4f7b-9fdd-0fc899a2923a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 25 23:56:21 compute-0 nova_compute[252550]: 2025-11-25 23:56:21.971 252558 DEBUG nova.storage.rbd_utils [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] flattening images/ae4d3bd2-86c2-4f7b-9fdd-0fc899a2923a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 25 23:56:22 compute-0 nova_compute[252550]: 2025-11-25 23:56:22.153 252558 DEBUG nova.storage.rbd_utils [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] removing snapshot(b26e7a6e5ea0472a9476b66aee2cf159) on rbd image(861debf8-73c8-45fe-92d9-fbfa772d34eb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 25 23:56:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v822: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 22 KiB/s wr, 103 op/s
Nov 25 23:56:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 25 23:56:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 25 23:56:22 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 25 23:56:22 compute-0 ceph-mon[75654]: osdmap e98: 3 total, 3 up, 3 in
Nov 25 23:56:22 compute-0 nova_compute[252550]: 2025-11-25 23:56:22.820 252558 DEBUG nova.storage.rbd_utils [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] creating snapshot(snap) on rbd image(ae4d3bd2-86c2-4f7b-9fdd-0fc899a2923a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 25 23:56:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 25 23:56:23 compute-0 ceph-mon[75654]: pgmap v822: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 22 KiB/s wr, 103 op/s
Nov 25 23:56:23 compute-0 ceph-mon[75654]: osdmap e99: 3 total, 3 up, 3 in
Nov 25 23:56:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 25 23:56:23 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 25 23:56:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v825: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 26 KiB/s wr, 64 op/s
Nov 25 23:56:24 compute-0 ceph-mon[75654]: osdmap e100: 3 total, 3 up, 3 in
Nov 25 23:56:25 compute-0 nova_compute[252550]: 2025-11-25 23:56:25.169 252558 INFO nova.virt.libvirt.driver [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Snapshot image upload complete
Nov 25 23:56:25 compute-0 nova_compute[252550]: 2025-11-25 23:56:25.170 252558 INFO nova.compute.manager [None req-837ccfe9-25c7-4456-8597-879a90097a2e 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Took 5.36 seconds to snapshot the instance on the hypervisor.
Nov 25 23:56:25 compute-0 podman[258569]: 2025-11-25 23:56:25.287956475 +0000 UTC m=+0.095245388 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 23:56:25 compute-0 podman[258568]: 2025-11-25 23:56:25.32667386 +0000 UTC m=+0.142335957 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 23:56:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:25 compute-0 ceph-mon[75654]: pgmap v825: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 26 KiB/s wr, 64 op/s
Nov 25 23:56:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:56:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:56:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:56:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:56:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:56:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:56:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v826: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 29 KiB/s wr, 153 op/s
Nov 25 23:56:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 25 23:56:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 25 23:56:26 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 25 23:56:27 compute-0 nova_compute[252550]: 2025-11-25 23:56:27.559 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:27 compute-0 nova_compute[252550]: 2025-11-25 23:56:27.560 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:27 compute-0 nova_compute[252550]: 2025-11-25 23:56:27.561 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "861debf8-73c8-45fe-92d9-fbfa772d34eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:27 compute-0 nova_compute[252550]: 2025-11-25 23:56:27.561 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:27 compute-0 nova_compute[252550]: 2025-11-25 23:56:27.562 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:27 compute-0 nova_compute[252550]: 2025-11-25 23:56:27.564 252558 INFO nova.compute.manager [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Terminating instance
Nov 25 23:56:27 compute-0 nova_compute[252550]: 2025-11-25 23:56:27.565 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "refresh_cache-861debf8-73c8-45fe-92d9-fbfa772d34eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 23:56:27 compute-0 nova_compute[252550]: 2025-11-25 23:56:27.566 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquired lock "refresh_cache-861debf8-73c8-45fe-92d9-fbfa772d34eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 23:56:27 compute-0 nova_compute[252550]: 2025-11-25 23:56:27.567 252558 DEBUG nova.network.neutron [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 23:56:27 compute-0 ceph-mon[75654]: pgmap v826: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 29 KiB/s wr, 153 op/s
Nov 25 23:56:27 compute-0 ceph-mon[75654]: osdmap e101: 3 total, 3 up, 3 in
Nov 25 23:56:28 compute-0 nova_compute[252550]: 2025-11-25 23:56:28.223 252558 DEBUG nova.network.neutron [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 23:56:28 compute-0 nova_compute[252550]: 2025-11-25 23:56:28.448 252558 DEBUG nova.network.neutron [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 23:56:28 compute-0 nova_compute[252550]: 2025-11-25 23:56:28.504 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Releasing lock "refresh_cache-861debf8-73c8-45fe-92d9-fbfa772d34eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 23:56:28 compute-0 nova_compute[252550]: 2025-11-25 23:56:28.505 252558 DEBUG nova.compute.manager [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 23:56:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v828: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 6.7 KiB/s wr, 164 op/s
Nov 25 23:56:28 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 25 23:56:28 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 1.131s CPU time.
Nov 25 23:56:28 compute-0 systemd-machined[213892]: Machine qemu-2-instance-00000002 terminated.
Nov 25 23:56:28 compute-0 nova_compute[252550]: 2025-11-25 23:56:28.732 252558 INFO nova.virt.libvirt.driver [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance destroyed successfully.
Nov 25 23:56:28 compute-0 nova_compute[252550]: 2025-11-25 23:56:28.732 252558 DEBUG nova.objects.instance [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lazy-loading 'resources' on Instance uuid 861debf8-73c8-45fe-92d9-fbfa772d34eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 23:56:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 25 23:56:29 compute-0 ceph-mon[75654]: pgmap v828: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 6.7 KiB/s wr, 164 op/s
Nov 25 23:56:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 25 23:56:29 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 25 23:56:30 compute-0 nova_compute[252550]: 2025-11-25 23:56:30.223 252558 INFO nova.virt.libvirt.driver [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Deleting instance files /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb_del
Nov 25 23:56:30 compute-0 nova_compute[252550]: 2025-11-25 23:56:30.224 252558 INFO nova.virt.libvirt.driver [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Deletion of /var/lib/nova/instances/861debf8-73c8-45fe-92d9-fbfa772d34eb_del complete
Nov 25 23:56:30 compute-0 nova_compute[252550]: 2025-11-25 23:56:30.377 252558 DEBUG nova.virt.libvirt.host [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 25 23:56:30 compute-0 nova_compute[252550]: 2025-11-25 23:56:30.378 252558 INFO nova.virt.libvirt.host [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] UEFI support detected
Nov 25 23:56:30 compute-0 nova_compute[252550]: 2025-11-25 23:56:30.380 252558 INFO nova.compute.manager [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Took 1.87 seconds to destroy the instance on the hypervisor.
Nov 25 23:56:30 compute-0 nova_compute[252550]: 2025-11-25 23:56:30.381 252558 DEBUG oslo.service.loopingcall [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 23:56:30 compute-0 nova_compute[252550]: 2025-11-25 23:56:30.382 252558 DEBUG nova.compute.manager [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 23:56:30 compute-0 nova_compute[252550]: 2025-11-25 23:56:30.382 252558 DEBUG nova.network.neutron [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 23:56:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v830: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 5.9 KiB/s wr, 145 op/s
Nov 25 23:56:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 25 23:56:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 25 23:56:30 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 25 23:56:30 compute-0 ceph-mon[75654]: osdmap e102: 3 total, 3 up, 3 in
Nov 25 23:56:30 compute-0 ceph-mon[75654]: osdmap e103: 3 total, 3 up, 3 in
Nov 25 23:56:31 compute-0 nova_compute[252550]: 2025-11-25 23:56:31.220 252558 DEBUG nova.network.neutron [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 23:56:31 compute-0 nova_compute[252550]: 2025-11-25 23:56:31.307 252558 DEBUG nova.network.neutron [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 23:56:31 compute-0 nova_compute[252550]: 2025-11-25 23:56:31.402 252558 INFO nova.compute.manager [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Took 1.02 seconds to deallocate network for instance.
Nov 25 23:56:31 compute-0 nova_compute[252550]: 2025-11-25 23:56:31.612 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:31 compute-0 nova_compute[252550]: 2025-11-25 23:56:31.612 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:31 compute-0 nova_compute[252550]: 2025-11-25 23:56:31.712 252558 DEBUG oslo_concurrency.processutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:56:31 compute-0 ceph-mon[75654]: pgmap v830: 177 pgs: 177 active+clean; 42 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 5.9 KiB/s wr, 145 op/s
Nov 25 23:56:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:56:32 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2076522244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:56:32 compute-0 nova_compute[252550]: 2025-11-25 23:56:32.196 252558 DEBUG oslo_concurrency.processutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:56:32 compute-0 nova_compute[252550]: 2025-11-25 23:56:32.204 252558 DEBUG nova.compute.provider_tree [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:56:32 compute-0 nova_compute[252550]: 2025-11-25 23:56:32.261 252558 DEBUG nova.scheduler.client.report [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:56:32 compute-0 nova_compute[252550]: 2025-11-25 23:56:32.474 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v832: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 5.8 KiB/s wr, 137 op/s
Nov 25 23:56:32 compute-0 nova_compute[252550]: 2025-11-25 23:56:32.565 252558 INFO nova.scheduler.client.report [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Deleted allocations for instance 861debf8-73c8-45fe-92d9-fbfa772d34eb
Nov 25 23:56:32 compute-0 nova_compute[252550]: 2025-11-25 23:56:32.856 252558 DEBUG oslo_concurrency.lockutils [None req-02538426-0138-416f-8f0f-24f69212ff9a 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "861debf8-73c8-45fe-92d9-fbfa772d34eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2076522244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:56:33 compute-0 ceph-mon[75654]: pgmap v832: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 5.8 KiB/s wr, 137 op/s
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.398 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.398 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.399 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.399 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.399 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.401 252558 INFO nova.compute.manager [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Terminating instance
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.403 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.403 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquired lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.404 252558 DEBUG nova.network.neutron [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 23:56:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v833: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.5 KiB/s wr, 106 op/s
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.645 252558 DEBUG nova.network.neutron [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.884 252558 DEBUG nova.network.neutron [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.900 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Releasing lock "refresh_cache-bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 23:56:34 compute-0 nova_compute[252550]: 2025-11-25 23:56:34.901 252558 DEBUG nova.compute.manager [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 23:56:34 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 25 23:56:34 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 1.223s CPU time.
Nov 25 23:56:34 compute-0 systemd-machined[213892]: Machine qemu-1-instance-00000001 terminated.
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.130 252558 INFO nova.virt.libvirt.driver [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance destroyed successfully.
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.131 252558 DEBUG nova.objects.instance [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lazy-loading 'resources' on Instance uuid bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.357 252558 INFO nova.virt.libvirt.driver [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Deleting instance files /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_del
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.359 252558 INFO nova.virt.libvirt.driver [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Deletion of /var/lib/nova/instances/bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91_del complete
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.433 252558 INFO nova.compute.manager [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Took 0.53 seconds to destroy the instance on the hypervisor.
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.434 252558 DEBUG oslo.service.loopingcall [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.435 252558 DEBUG nova.compute.manager [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.435 252558 DEBUG nova.network.neutron [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 23:56:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 25 23:56:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 25 23:56:35 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 25 23:56:35 compute-0 sudo[258680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:56:35 compute-0 sudo[258680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:35 compute-0 sudo[258680]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:35 compute-0 ceph-mon[75654]: pgmap v833: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.5 KiB/s wr, 106 op/s
Nov 25 23:56:35 compute-0 ceph-mon[75654]: osdmap e104: 3 total, 3 up, 3 in
Nov 25 23:56:35 compute-0 sudo[258705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:56:35 compute-0 sudo[258705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:35 compute-0 sudo[258705]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.953 252558 DEBUG nova.network.neutron [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.972 252558 DEBUG nova.network.neutron [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 23:56:35 compute-0 nova_compute[252550]: 2025-11-25 23:56:35.990 252558 INFO nova.compute.manager [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Took 0.56 seconds to deallocate network for instance.
Nov 25 23:56:36 compute-0 sudo[258730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:56:36 compute-0 sudo[258730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:36 compute-0 sudo[258730]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:36 compute-0 sudo[258755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:56:36 compute-0 sudo[258755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:36 compute-0 nova_compute[252550]: 2025-11-25 23:56:36.266 252558 INFO nova.compute.manager [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Took 0.28 seconds to detach 1 volumes for instance.
Nov 25 23:56:36 compute-0 nova_compute[252550]: 2025-11-25 23:56:36.268 252558 DEBUG nova.compute.manager [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Deleting volume: 6a6b9d67-6cf8-4dcc-abf1-e7df17195818 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 25 23:56:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v835: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.5 KiB/s wr, 71 op/s
Nov 25 23:56:36 compute-0 sudo[258755]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:56:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:56:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:56:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:56:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:56:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:56:36 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 3f68cd52-6cd8-48b9-960c-b98ee7b3714d does not exist
Nov 25 23:56:36 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev e802d9a8-3186-4530-b532-29bda0af174c does not exist
Nov 25 23:56:36 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 172aa32d-adea-42f2-ada9-fbbf95b9179b does not exist
Nov 25 23:56:36 compute-0 nova_compute[252550]: 2025-11-25 23:56:36.761 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:56:36 compute-0 nova_compute[252550]: 2025-11-25 23:56:36.762 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:56:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:56:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:56:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:56:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:56:36 compute-0 nova_compute[252550]: 2025-11-25 23:56:36.825 252558 DEBUG oslo_concurrency.processutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:56:36 compute-0 sudo[258810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:56:36 compute-0 sudo[258810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:36 compute-0 sudo[258810]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:56:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:56:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:56:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:56:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:56:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:56:36 compute-0 sudo[258836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:56:36 compute-0 sudo[258836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:36 compute-0 sudo[258836]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:37 compute-0 sudo[258865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:56:37 compute-0 sudo[258865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:37 compute-0 sudo[258865]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:37 compute-0 sudo[258905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:56:37 compute-0 sudo[258905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:56:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502974040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:56:37 compute-0 nova_compute[252550]: 2025-11-25 23:56:37.285 252558 DEBUG oslo_concurrency.processutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:56:37 compute-0 nova_compute[252550]: 2025-11-25 23:56:37.294 252558 DEBUG nova.compute.provider_tree [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:56:37 compute-0 podman[258971]: 2025-11-25 23:56:37.463671668 +0000 UTC m=+0.045604770 container create 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:56:37 compute-0 systemd[1]: Started libpod-conmon-5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96.scope.
Nov 25 23:56:37 compute-0 nova_compute[252550]: 2025-11-25 23:56:37.520 252558 DEBUG nova.scheduler.client.report [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:56:37 compute-0 podman[258971]: 2025-11-25 23:56:37.44205476 +0000 UTC m=+0.023987892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:56:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:56:37 compute-0 podman[258971]: 2025-11-25 23:56:37.568654445 +0000 UTC m=+0.150587618 container init 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 25 23:56:37 compute-0 podman[258971]: 2025-11-25 23:56:37.580319097 +0000 UTC m=+0.162252219 container start 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:56:37 compute-0 podman[258971]: 2025-11-25 23:56:37.583672327 +0000 UTC m=+0.165605459 container attach 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 25 23:56:37 compute-0 magical_noyce[258985]: 167 167
Nov 25 23:56:37 compute-0 systemd[1]: libpod-5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96.scope: Deactivated successfully.
Nov 25 23:56:37 compute-0 podman[258971]: 2025-11-25 23:56:37.589784351 +0000 UTC m=+0.171717483 container died 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 25 23:56:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7772ce9b3e689a7de297b1374e900ceeb675d16a88472188871961456ffedb30-merged.mount: Deactivated successfully.
Nov 25 23:56:37 compute-0 podman[258971]: 2025-11-25 23:56:37.642289075 +0000 UTC m=+0.224222167 container remove 5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_noyce, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:56:37 compute-0 systemd[1]: libpod-conmon-5735c4f5894fb708e46b7d10346bb23c5de40f5dd7e3f6fe91c78d7657eb2e96.scope: Deactivated successfully.
Nov 25 23:56:37 compute-0 nova_compute[252550]: 2025-11-25 23:56:37.855 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:37 compute-0 podman[259012]: 2025-11-25 23:56:37.868239327 +0000 UTC m=+0.077458462 container create d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:56:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 25 23:56:37 compute-0 ceph-mon[75654]: pgmap v835: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.5 KiB/s wr, 71 op/s
Nov 25 23:56:37 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1502974040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:56:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 25 23:56:37 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 25 23:56:37 compute-0 systemd[1]: Started libpod-conmon-d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e.scope.
Nov 25 23:56:37 compute-0 podman[259012]: 2025-11-25 23:56:37.838756179 +0000 UTC m=+0.047975354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:56:37 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:37 compute-0 podman[259012]: 2025-11-25 23:56:37.998958643 +0000 UTC m=+0.208177748 container init d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:56:38 compute-0 podman[259012]: 2025-11-25 23:56:38.005317543 +0000 UTC m=+0.214536668 container start d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 23:56:38 compute-0 podman[259012]: 2025-11-25 23:56:38.010838261 +0000 UTC m=+0.220057386 container attach d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 25 23:56:38 compute-0 nova_compute[252550]: 2025-11-25 23:56:38.038 252558 INFO nova.scheduler.client.report [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Deleted allocations for instance bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91
Nov 25 23:56:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v837: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.3 KiB/s wr, 77 op/s
Nov 25 23:56:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:56:38 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298838289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:56:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:56:38 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298838289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:56:38 compute-0 nova_compute[252550]: 2025-11-25 23:56:38.755 252558 DEBUG oslo_concurrency.lockutils [None req-a4131b98-4fbd-4af7-9064-d6f6f6087de8 210f8faea4e1416ab82c35b428209415 cda2ac0afb334f238d6d956454314f3d - - default default] Lock "bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:38 compute-0 ceph-mon[75654]: osdmap e105: 3 total, 3 up, 3 in
Nov 25 23:56:38 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2298838289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:56:38 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2298838289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:56:39 compute-0 wizardly_perlman[259030]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:56:39 compute-0 wizardly_perlman[259030]: --> relative data size: 1.0
Nov 25 23:56:39 compute-0 wizardly_perlman[259030]: --> All data devices are unavailable
Nov 25 23:56:39 compute-0 systemd[1]: libpod-d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e.scope: Deactivated successfully.
Nov 25 23:56:39 compute-0 podman[259012]: 2025-11-25 23:56:39.177295625 +0000 UTC m=+1.386514730 container died d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 23:56:39 compute-0 systemd[1]: libpod-d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e.scope: Consumed 1.107s CPU time.
Nov 25 23:56:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4da2206a777cbdbfda9d3f20ef3ff1d1b062b5d97c9edc166b7f2c92cb2b9d0c-merged.mount: Deactivated successfully.
Nov 25 23:56:39 compute-0 podman[259012]: 2025-11-25 23:56:39.248750396 +0000 UTC m=+1.457969491 container remove d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 25 23:56:39 compute-0 systemd[1]: libpod-conmon-d9d68eadb5752a0a9a4f382ff4cf66635adbf215b89c0572cb662c15c0391c5e.scope: Deactivated successfully.
Nov 25 23:56:39 compute-0 sudo[258905]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:39 compute-0 sudo[259073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:56:39 compute-0 sudo[259073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:39 compute-0 sudo[259073]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:39 compute-0 sudo[259098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:56:39 compute-0 sudo[259098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:39 compute-0 sudo[259098]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:39 compute-0 sudo[259123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:56:39 compute-0 sudo[259123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:39 compute-0 sudo[259123]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:39 compute-0 sudo[259148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:56:39 compute-0 sudo[259148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:39 compute-0 ceph-mon[75654]: pgmap v837: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.3 KiB/s wr, 77 op/s
Nov 25 23:56:40 compute-0 podman[259217]: 2025-11-25 23:56:40.11638893 +0000 UTC m=+0.056043110 container create 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:56:40 compute-0 systemd[1]: Started libpod-conmon-58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88.scope.
Nov 25 23:56:40 compute-0 podman[259217]: 2025-11-25 23:56:40.091038522 +0000 UTC m=+0.030692772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:56:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:56:40 compute-0 podman[259217]: 2025-11-25 23:56:40.21362859 +0000 UTC m=+0.153282800 container init 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:56:40 compute-0 podman[259217]: 2025-11-25 23:56:40.2233714 +0000 UTC m=+0.163025580 container start 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:56:40 compute-0 podman[259217]: 2025-11-25 23:56:40.226731331 +0000 UTC m=+0.166385551 container attach 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 25 23:56:40 compute-0 tender_rosalind[259233]: 167 167
Nov 25 23:56:40 compute-0 systemd[1]: libpod-58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88.scope: Deactivated successfully.
Nov 25 23:56:40 compute-0 podman[259217]: 2025-11-25 23:56:40.231846847 +0000 UTC m=+0.171501057 container died 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:56:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-05a30ce1c2f1dcdb84bf54f232b5af3cb0b73da11a2f17378c58339967abe0f8-merged.mount: Deactivated successfully.
Nov 25 23:56:40 compute-0 podman[259217]: 2025-11-25 23:56:40.283733914 +0000 UTC m=+0.223388124 container remove 58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:56:40 compute-0 systemd[1]: libpod-conmon-58ee2e3617ae61acc6ae3839582db03192c89a306378d0a4ed4776fd620d8c88.scope: Deactivated successfully.
Nov 25 23:56:40 compute-0 podman[259259]: 2025-11-25 23:56:40.510101818 +0000 UTC m=+0.058351701 container create 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:56:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v838: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Nov 25 23:56:40 compute-0 systemd[1]: Started libpod-conmon-24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093.scope.
Nov 25 23:56:40 compute-0 podman[259259]: 2025-11-25 23:56:40.491988434 +0000 UTC m=+0.040238347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:56:40 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0333a495d56db7be6822c44007d40165e352ec317647609736f990d5301a815b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0333a495d56db7be6822c44007d40165e352ec317647609736f990d5301a815b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0333a495d56db7be6822c44007d40165e352ec317647609736f990d5301a815b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0333a495d56db7be6822c44007d40165e352ec317647609736f990d5301a815b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:40 compute-0 podman[259259]: 2025-11-25 23:56:40.616902654 +0000 UTC m=+0.165152537 container init 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:56:40 compute-0 podman[259259]: 2025-11-25 23:56:40.634763162 +0000 UTC m=+0.183013045 container start 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 25 23:56:40 compute-0 podman[259259]: 2025-11-25 23:56:40.638817831 +0000 UTC m=+0.187067714 container attach 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 23:56:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:56:40.765 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:56:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:56:40.767 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:56:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:56:40.768 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]: {
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:     "0": [
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:         {
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "devices": [
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "/dev/loop3"
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             ],
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_name": "ceph_lv0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_size": "21470642176",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "name": "ceph_lv0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "tags": {
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.cluster_name": "ceph",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.crush_device_class": "",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.encrypted": "0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.osd_id": "0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.type": "block",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.vdo": "0"
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             },
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "type": "block",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "vg_name": "ceph_vg0"
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:         }
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:     ],
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:     "1": [
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:         {
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "devices": [
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "/dev/loop4"
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             ],
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_name": "ceph_lv1",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_size": "21470642176",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "name": "ceph_lv1",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "tags": {
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.cluster_name": "ceph",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.crush_device_class": "",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.encrypted": "0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.osd_id": "1",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.type": "block",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.vdo": "0"
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             },
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "type": "block",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "vg_name": "ceph_vg1"
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:         }
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:     ],
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:     "2": [
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:         {
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "devices": [
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "/dev/loop5"
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             ],
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_name": "ceph_lv2",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_size": "21470642176",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "name": "ceph_lv2",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "tags": {
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.cluster_name": "ceph",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.crush_device_class": "",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.encrypted": "0",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.osd_id": "2",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.type": "block",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:                 "ceph.vdo": "0"
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             },
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "type": "block",
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:             "vg_name": "ceph_vg2"
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:         }
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]:     ]
Nov 25 23:56:41 compute-0 blissful_mirzakhani[259275]: }
Nov 25 23:56:41 compute-0 systemd[1]: libpod-24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093.scope: Deactivated successfully.
Nov 25 23:56:41 compute-0 podman[259259]: 2025-11-25 23:56:41.408626538 +0000 UTC m=+0.956876411 container died 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 25 23:56:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0333a495d56db7be6822c44007d40165e352ec317647609736f990d5301a815b-merged.mount: Deactivated successfully.
Nov 25 23:56:41 compute-0 podman[259259]: 2025-11-25 23:56:41.480160111 +0000 UTC m=+1.028409994 container remove 24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 25 23:56:41 compute-0 systemd[1]: libpod-conmon-24a70454e73085ca653bdec7848e553f915917f34110246ac85b105a38622093.scope: Deactivated successfully.
Nov 25 23:56:41 compute-0 sudo[259148]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:41 compute-0 sudo[259297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:56:41 compute-0 sudo[259297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:41 compute-0 sudo[259297]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:41 compute-0 sudo[259322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:56:41 compute-0 sudo[259322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:41 compute-0 sudo[259322]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:41 compute-0 sudo[259347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:56:41 compute-0 sudo[259347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:41 compute-0 sudo[259347]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:41 compute-0 sudo[259372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:56:41 compute-0 sudo[259372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:41 compute-0 ceph-mon[75654]: pgmap v838: 177 pgs: 177 active+clean; 41 MiB data, 188 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Nov 25 23:56:42 compute-0 podman[259438]: 2025-11-25 23:56:42.320016671 +0000 UTC m=+0.066921401 container create dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 23:56:42 compute-0 systemd[1]: Started libpod-conmon-dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4.scope.
Nov 25 23:56:42 compute-0 podman[259438]: 2025-11-25 23:56:42.290565043 +0000 UTC m=+0.037469813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:56:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:56:42 compute-0 podman[259438]: 2025-11-25 23:56:42.420833057 +0000 UTC m=+0.167737827 container init dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:56:42 compute-0 podman[259438]: 2025-11-25 23:56:42.432218022 +0000 UTC m=+0.179122742 container start dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:56:42 compute-0 podman[259438]: 2025-11-25 23:56:42.436591058 +0000 UTC m=+0.183495778 container attach dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:56:42 compute-0 unruffled_joliot[259454]: 167 167
Nov 25 23:56:42 compute-0 systemd[1]: libpod-dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4.scope: Deactivated successfully.
Nov 25 23:56:42 compute-0 podman[259438]: 2025-11-25 23:56:42.440632507 +0000 UTC m=+0.187537227 container died dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 23:56:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0199f5cc8f5026ea0e5979b791f59876af8fd8626d2e029a989bc121e307e24-merged.mount: Deactivated successfully.
Nov 25 23:56:42 compute-0 podman[259438]: 2025-11-25 23:56:42.488942538 +0000 UTC m=+0.235847268 container remove dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:56:42 compute-0 systemd[1]: libpod-conmon-dc98286edf4925ef89efc0a2a62f5f4724109a160c067150336490f0506bdbe4.scope: Deactivated successfully.
Nov 25 23:56:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v839: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Nov 25 23:56:42 compute-0 podman[259478]: 2025-11-25 23:56:42.766756258 +0000 UTC m=+0.064820325 container create 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 23:56:42 compute-0 systemd[1]: Started libpod-conmon-0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d.scope.
Nov 25 23:56:42 compute-0 podman[259478]: 2025-11-25 23:56:42.74552305 +0000 UTC m=+0.043587117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:56:42 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a24eacf50eab24a562d0dde8543e71bbdb6dc79c187210c25195cf25d2dfaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a24eacf50eab24a562d0dde8543e71bbdb6dc79c187210c25195cf25d2dfaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a24eacf50eab24a562d0dde8543e71bbdb6dc79c187210c25195cf25d2dfaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a24eacf50eab24a562d0dde8543e71bbdb6dc79c187210c25195cf25d2dfaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:56:42 compute-0 podman[259478]: 2025-11-25 23:56:42.885099253 +0000 UTC m=+0.183163360 container init 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 25 23:56:42 compute-0 podman[259478]: 2025-11-25 23:56:42.900853173 +0000 UTC m=+0.198917240 container start 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:56:42 compute-0 podman[259478]: 2025-11-25 23:56:42.905271882 +0000 UTC m=+0.203335939 container attach 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 23:56:43 compute-0 nova_compute[252550]: 2025-11-25 23:56:43.730 252558 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764114988.729541, 861debf8-73c8-45fe-92d9-fbfa772d34eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 23:56:43 compute-0 nova_compute[252550]: 2025-11-25 23:56:43.732 252558 INFO nova.compute.manager [-] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] VM Stopped (Lifecycle Event)
Nov 25 23:56:43 compute-0 nova_compute[252550]: 2025-11-25 23:56:43.918 252558 DEBUG nova.compute.manager [None req-700a83da-3630-416d-8345-ea562f9ad019 - - - - - -] [instance: 861debf8-73c8-45fe-92d9-fbfa772d34eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 23:56:43 compute-0 ceph-mon[75654]: pgmap v839: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]: {
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "osd_id": 2,
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "type": "bluestore"
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:     },
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "osd_id": 1,
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "type": "bluestore"
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:     },
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "osd_id": 0,
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:         "type": "bluestore"
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]:     }
Nov 25 23:56:43 compute-0 crazy_kapitsa[259494]: }
Nov 25 23:56:44 compute-0 systemd[1]: libpod-0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d.scope: Deactivated successfully.
Nov 25 23:56:44 compute-0 systemd[1]: libpod-0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d.scope: Consumed 1.124s CPU time.
Nov 25 23:56:44 compute-0 podman[259478]: 2025-11-25 23:56:44.019048998 +0000 UTC m=+1.317113085 container died 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 23:56:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-27a24eacf50eab24a562d0dde8543e71bbdb6dc79c187210c25195cf25d2dfaf-merged.mount: Deactivated successfully.
Nov 25 23:56:44 compute-0 podman[259478]: 2025-11-25 23:56:44.099413457 +0000 UTC m=+1.397477494 container remove 0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kapitsa, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:56:44 compute-0 systemd[1]: libpod-conmon-0c7055bc7008e5c5935e3734ad1a0ed674819a3bfd05cd10b4b7deef345aad3d.scope: Deactivated successfully.
Nov 25 23:56:44 compute-0 sudo[259372]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:56:44 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:56:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:56:44 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:56:44 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 11a5c889-aeac-446f-835f-78247efc7bf9 does not exist
Nov 25 23:56:44 compute-0 sudo[259539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:56:44 compute-0 sudo[259539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:44 compute-0 sudo[259539]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:44 compute-0 sudo[259564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:56:44 compute-0 sudo[259564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:56:44 compute-0 sudo[259564]: pam_unix(sudo:session): session closed for user root
Nov 25 23:56:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v840: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Nov 25 23:56:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:56:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:56:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 25 23:56:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 25 23:56:45 compute-0 ceph-mon[75654]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 25 23:56:46 compute-0 ceph-mon[75654]: pgmap v840: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Nov 25 23:56:46 compute-0 ceph-mon[75654]: osdmap e106: 3 total, 3 up, 3 in
Nov 25 23:56:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v842: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 236 B/s wr, 20 op/s
Nov 25 23:56:48 compute-0 ceph-mon[75654]: pgmap v842: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 236 B/s wr, 20 op/s
Nov 25 23:56:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v843: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 204 B/s wr, 18 op/s
Nov 25 23:56:49 compute-0 podman[259589]: 2025-11-25 23:56:49.280381572 +0000 UTC m=+0.093035480 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 23:56:50 compute-0 nova_compute[252550]: 2025-11-25 23:56:50.128 252558 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764114995.1264982, bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 23:56:50 compute-0 nova_compute[252550]: 2025-11-25 23:56:50.129 252558 INFO nova.compute.manager [-] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] VM Stopped (Lifecycle Event)
Nov 25 23:56:50 compute-0 ceph-mon[75654]: pgmap v843: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 204 B/s wr, 18 op/s
Nov 25 23:56:50 compute-0 nova_compute[252550]: 2025-11-25 23:56:50.271 252558 DEBUG nova.compute.manager [None req-0f65cb1f-472c-4d4b-afd3-b7c5f75861ca - - - - - -] [instance: bb6a7c0f-c85b-4e9c-8c22-c7ab1531ae91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 23:56:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v844: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 204 B/s wr, 18 op/s
Nov 25 23:56:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:52 compute-0 ceph-mon[75654]: pgmap v844: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 204 B/s wr, 18 op/s
Nov 25 23:56:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v845: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:56:54 compute-0 ceph-mon[75654]: pgmap v845: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:56:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v846: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:56:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:56:56
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'images', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms']
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:56:56 compute-0 podman[259610]: 2025-11-25 23:56:56.242168032 +0000 UTC m=+0.062727859 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:56:56 compute-0 podman[259609]: 2025-11-25 23:56:56.303288017 +0000 UTC m=+0.119736574 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 23:56:56 compute-0 ceph-mon[75654]: pgmap v846: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:56:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v847: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:56:57 compute-0 ceph-mon[75654]: pgmap v847: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:56:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v848: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:56:59 compute-0 ceph-mon[75654]: pgmap v848: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v849: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:01 compute-0 ceph-mon[75654]: pgmap v849: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:57:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:57:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v850: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:03 compute-0 ceph-mon[75654]: pgmap v850: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v851: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:05 compute-0 ceph-mon[75654]: pgmap v851: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:05 compute-0 nova_compute[252550]: 2025-11-25 23:57:05.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:57:05 compute-0 nova_compute[252550]: 2025-11-25 23:57:05.824 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:57:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v852: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:06 compute-0 nova_compute[252550]: 2025-11-25 23:57:06.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:57:06 compute-0 nova_compute[252550]: 2025-11-25 23:57:06.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 23:57:07 compute-0 ceph-mon[75654]: pgmap v852: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.817 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.871 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.871 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.871 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.899 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.900 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.900 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.942 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.943 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.943 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.943 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 23:57:07 compute-0 nova_compute[252550]: 2025-11-25 23:57:07.944 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:57:08 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:57:08 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2988001353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:57:08 compute-0 nova_compute[252550]: 2025-11-25 23:57:08.370 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:57:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v853: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:08 compute-0 nova_compute[252550]: 2025-11-25 23:57:08.606 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:57:08 compute-0 nova_compute[252550]: 2025-11-25 23:57:08.607 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5238MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 23:57:08 compute-0 nova_compute[252550]: 2025-11-25 23:57:08.607 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:57:08 compute-0 nova_compute[252550]: 2025-11-25 23:57:08.608 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:57:08 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2988001353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:57:08 compute-0 nova_compute[252550]: 2025-11-25 23:57:08.784 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 23:57:08 compute-0 nova_compute[252550]: 2025-11-25 23:57:08.785 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 23:57:08 compute-0 nova_compute[252550]: 2025-11-25 23:57:08.807 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:57:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:57:09 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820429378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:57:09 compute-0 nova_compute[252550]: 2025-11-25 23:57:09.233 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:57:09 compute-0 nova_compute[252550]: 2025-11-25 23:57:09.240 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:57:09 compute-0 nova_compute[252550]: 2025-11-25 23:57:09.315 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:57:09 compute-0 nova_compute[252550]: 2025-11-25 23:57:09.482 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 23:57:09 compute-0 nova_compute[252550]: 2025-11-25 23:57:09.483 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:57:09 compute-0 ceph-mon[75654]: pgmap v853: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:09 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3820429378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:57:10 compute-0 nova_compute[252550]: 2025-11-25 23:57:10.406 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:57:10 compute-0 nova_compute[252550]: 2025-11-25 23:57:10.406 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:57:10 compute-0 nova_compute[252550]: 2025-11-25 23:57:10.407 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:57:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v854: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:11 compute-0 ceph-mon[75654]: pgmap v854: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v855: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:13 compute-0 ceph-mon[75654]: pgmap v855: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v856: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:15 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:57:15.681 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:82:13', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '36:f3:66:b7:57:d1'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 23:57:15 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:57:15.682 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 23:57:15 compute-0 ceph-mon[75654]: pgmap v856: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v857: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:57:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2157042791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:57:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:57:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2157042791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:57:17 compute-0 ceph-mon[75654]: pgmap v857: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2157042791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:57:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2157042791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:57:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v858: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:19 compute-0 ceph-mon[75654]: pgmap v858: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:20 compute-0 podman[259696]: 2025-11-25 23:57:20.256803675 +0000 UTC m=+0.079513298 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 23:57:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v859: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:21 compute-0 ceph-mon[75654]: pgmap v859: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v860: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:23 compute-0 ceph-mon[75654]: pgmap v860: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v861: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:24 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:57:24.684 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 23:57:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.682094) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045682130, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1948, "num_deletes": 266, "total_data_size": 1977437, "memory_usage": 2020312, "flush_reason": "Manual Compaction"}
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045696446, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1377443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15723, "largest_seqno": 17670, "table_properties": {"data_size": 1370089, "index_size": 4172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17896, "raw_average_key_size": 21, "raw_value_size": 1354256, "raw_average_value_size": 1610, "num_data_blocks": 187, "num_entries": 841, "num_filter_entries": 841, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764114901, "oldest_key_time": 1764114901, "file_creation_time": 1764115045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 14424 microseconds, and 7957 cpu microseconds.
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.696508) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1377443 bytes OK
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.696544) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.698365) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.698385) EVENT_LOG_v1 {"time_micros": 1764115045698379, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.698405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1968935, prev total WAL file size 1968935, number of live WAL files 2.
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.699246) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1345KB)], [38(5624KB)]
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045699290, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 7136795, "oldest_snapshot_seqno": -1}
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 3904 keys, 5598075 bytes, temperature: kUnknown
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045741289, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 5598075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5570501, "index_size": 16712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9797, "raw_key_size": 91755, "raw_average_key_size": 23, "raw_value_size": 5498850, "raw_average_value_size": 1408, "num_data_blocks": 720, "num_entries": 3904, "num_filter_entries": 3904, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764115045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.741558) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 5598075 bytes
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.743151) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.6 rd, 133.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 5.5 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 4370, records dropped: 466 output_compression: NoCompression
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.743224) EVENT_LOG_v1 {"time_micros": 1764115045743167, "job": 18, "event": "compaction_finished", "compaction_time_micros": 42078, "compaction_time_cpu_micros": 27829, "output_level": 6, "num_output_files": 1, "total_output_size": 5598075, "num_input_records": 4370, "num_output_records": 3904, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045743909, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115045746271, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.699128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.746323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.746328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.746330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.746332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:57:25 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:57:25.746334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:57:25 compute-0 ceph-mon[75654]: pgmap v861: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:57:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:57:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:57:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:57:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:57:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:57:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v862: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:27 compute-0 podman[259717]: 2025-11-25 23:57:27.236036498 +0000 UTC m=+0.045288412 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 23:57:27 compute-0 podman[259716]: 2025-11-25 23:57:27.256052264 +0000 UTC m=+0.074326060 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 23:57:27 compute-0 ceph-mon[75654]: pgmap v862: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v863: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:29 compute-0 ceph-mon[75654]: pgmap v863: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v864: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:31 compute-0 ceph-mon[75654]: pgmap v864: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v865: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:33 compute-0 ceph-mon[75654]: pgmap v865: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v866: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:35 compute-0 ceph-mon[75654]: pgmap v866: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v867: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:37 compute-0 sshd-session[259760]: Invalid user userb from 171.244.140.22 port 47130
Nov 25 23:57:37 compute-0 sshd-session[259760]: Received disconnect from 171.244.140.22 port 47130:11: Bye Bye [preauth]
Nov 25 23:57:37 compute-0 sshd-session[259760]: Disconnected from invalid user userb 171.244.140.22 port 47130 [preauth]
Nov 25 23:57:37 compute-0 ceph-mon[75654]: pgmap v867: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v868: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:39 compute-0 ceph-mon[75654]: pgmap v868: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v869: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:57:40.766 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:57:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:57:40.766 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:57:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:57:40.767 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:57:41 compute-0 ceph-mon[75654]: pgmap v869: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v870: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:43 compute-0 ceph-mon[75654]: pgmap v870: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:44 compute-0 sudo[259762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:57:44 compute-0 sudo[259762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:44 compute-0 sudo[259762]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v871: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:44 compute-0 sudo[259787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:57:44 compute-0 sudo[259787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:44 compute-0 sudo[259787]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:44 compute-0 sudo[259812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:57:44 compute-0 sudo[259812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:44 compute-0 sudo[259812]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:44 compute-0 sudo[259837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:57:44 compute-0 sudo[259837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:45 compute-0 sudo[259837]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:57:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:57:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:57:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:57:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:57:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:57:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 52db2392-fb66-4d02-b889-2ef60b3c1fd8 does not exist
Nov 25 23:57:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 094f1874-3121-42b2-8170-77abbe15cf52 does not exist
Nov 25 23:57:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev ff4a0a3e-4b59-424c-90c5-c5323c79707b does not exist
Nov 25 23:57:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:57:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:57:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:57:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:57:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:57:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:57:45 compute-0 sudo[259894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:57:45 compute-0 sudo[259894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:45 compute-0 sudo[259894]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:45 compute-0 sudo[259919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:57:45 compute-0 sudo[259919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:45 compute-0 sudo[259919]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:45 compute-0 sudo[259944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:57:45 compute-0 sudo[259944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:45 compute-0 sudo[259944]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:45 compute-0 sudo[259969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:57:45 compute-0 sudo[259969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:45 compute-0 ceph-mon[75654]: pgmap v871: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:57:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:57:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:57:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:57:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:57:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:57:46 compute-0 podman[260035]: 2025-11-25 23:57:46.28014763 +0000 UTC m=+0.055224019 container create 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:57:46 compute-0 systemd[1]: Started libpod-conmon-55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6.scope.
Nov 25 23:57:46 compute-0 podman[260035]: 2025-11-25 23:57:46.252456899 +0000 UTC m=+0.027533338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:57:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:57:46 compute-0 podman[260035]: 2025-11-25 23:57:46.37629688 +0000 UTC m=+0.151373239 container init 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 23:57:46 compute-0 podman[260035]: 2025-11-25 23:57:46.385792625 +0000 UTC m=+0.160869004 container start 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 23:57:46 compute-0 podman[260035]: 2025-11-25 23:57:46.389615547 +0000 UTC m=+0.164691936 container attach 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 25 23:57:46 compute-0 optimistic_lalande[260051]: 167 167
Nov 25 23:57:46 compute-0 systemd[1]: libpod-55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6.scope: Deactivated successfully.
Nov 25 23:57:46 compute-0 conmon[260051]: conmon 55462a34330c2a458f37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6.scope/container/memory.events
Nov 25 23:57:46 compute-0 podman[260035]: 2025-11-25 23:57:46.393430099 +0000 UTC m=+0.168506478 container died 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 23:57:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4010bdd78fb8fd8801a0c35b1219ebeddc60d0656e9b9ec13814843a59efc72e-merged.mount: Deactivated successfully.
Nov 25 23:57:46 compute-0 podman[260035]: 2025-11-25 23:57:46.43498627 +0000 UTC m=+0.210062659 container remove 55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lalande, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 23:57:46 compute-0 systemd[1]: libpod-conmon-55462a34330c2a458f3700870543036b8e2ed8f1d89213c8f2cde45ef48100c6.scope: Deactivated successfully.
Nov 25 23:57:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v872: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:46 compute-0 podman[260075]: 2025-11-25 23:57:46.674865416 +0000 UTC m=+0.070898057 container create 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:57:46 compute-0 systemd[1]: Started libpod-conmon-1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683.scope.
Nov 25 23:57:46 compute-0 podman[260075]: 2025-11-25 23:57:46.64808501 +0000 UTC m=+0.044117711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:57:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:46 compute-0 podman[260075]: 2025-11-25 23:57:46.771452129 +0000 UTC m=+0.167484840 container init 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:57:46 compute-0 podman[260075]: 2025-11-25 23:57:46.790056036 +0000 UTC m=+0.186088667 container start 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:57:46 compute-0 podman[260075]: 2025-11-25 23:57:46.79394431 +0000 UTC m=+0.189976981 container attach 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:57:47 compute-0 cranky_mendeleev[260091]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:57:47 compute-0 cranky_mendeleev[260091]: --> relative data size: 1.0
Nov 25 23:57:47 compute-0 cranky_mendeleev[260091]: --> All data devices are unavailable
Nov 25 23:57:47 compute-0 systemd[1]: libpod-1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683.scope: Deactivated successfully.
Nov 25 23:57:47 compute-0 podman[260075]: 2025-11-25 23:57:47.860440071 +0000 UTC m=+1.256472712 container died 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 23:57:47 compute-0 systemd[1]: libpod-1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683.scope: Consumed 1.027s CPU time.
Nov 25 23:57:47 compute-0 ceph-mon[75654]: pgmap v872: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc4dffd57be29662be3da65f05004965059fb7c2fbf58a94f4a58413abdb37a4-merged.mount: Deactivated successfully.
Nov 25 23:57:47 compute-0 podman[260075]: 2025-11-25 23:57:47.924680609 +0000 UTC m=+1.320713200 container remove 1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendeleev, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 25 23:57:47 compute-0 systemd[1]: libpod-conmon-1593dfdd22e6ed8a32f8a0b0f2d93f175534644e15e7d32a3b0518a01fb4e683.scope: Deactivated successfully.
Nov 25 23:57:47 compute-0 sudo[259969]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:48 compute-0 sudo[260136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:57:48 compute-0 sudo[260136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:48 compute-0 sudo[260136]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:48 compute-0 sudo[260161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:57:48 compute-0 sudo[260161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:48 compute-0 sudo[260161]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:48 compute-0 sudo[260186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:57:48 compute-0 sudo[260186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:48 compute-0 sudo[260186]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:48 compute-0 sudo[260211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:57:48 compute-0 sudo[260211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v873: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:48 compute-0 podman[260276]: 2025-11-25 23:57:48.754223844 +0000 UTC m=+0.069058649 container create 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 23:57:48 compute-0 systemd[1]: Started libpod-conmon-39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35.scope.
Nov 25 23:57:48 compute-0 podman[260276]: 2025-11-25 23:57:48.728794344 +0000 UTC m=+0.043629239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:57:48 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:57:48 compute-0 podman[260276]: 2025-11-25 23:57:48.847294732 +0000 UTC m=+0.162129567 container init 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 23:57:48 compute-0 podman[260276]: 2025-11-25 23:57:48.854482194 +0000 UTC m=+0.169316999 container start 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:57:48 compute-0 podman[260276]: 2025-11-25 23:57:48.857506806 +0000 UTC m=+0.172341601 container attach 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:57:48 compute-0 epic_yalow[260292]: 167 167
Nov 25 23:57:48 compute-0 systemd[1]: libpod-39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35.scope: Deactivated successfully.
Nov 25 23:57:48 compute-0 podman[260276]: 2025-11-25 23:57:48.859511639 +0000 UTC m=+0.174346434 container died 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 25 23:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f488e99b0b8ecdb08c6d440bf61de4d4cc326a40962490efc6ef3b8d2084b1db-merged.mount: Deactivated successfully.
Nov 25 23:57:48 compute-0 podman[260276]: 2025-11-25 23:57:48.890494478 +0000 UTC m=+0.205329283 container remove 39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 25 23:57:48 compute-0 systemd[1]: libpod-conmon-39c0230d24041601a9d4ea407669eb1a67cb4991711f6ea14c4e9d97b4c94c35.scope: Deactivated successfully.
Nov 25 23:57:49 compute-0 podman[260316]: 2025-11-25 23:57:49.043807558 +0000 UTC m=+0.042502268 container create a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 25 23:57:49 compute-0 systemd[1]: Started libpod-conmon-a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35.scope.
Nov 25 23:57:49 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee23609cfba62df47eb745b19f7a510c7f17cfb433cab7e745bc19c7b3fc8f6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee23609cfba62df47eb745b19f7a510c7f17cfb433cab7e745bc19c7b3fc8f6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee23609cfba62df47eb745b19f7a510c7f17cfb433cab7e745bc19c7b3fc8f6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee23609cfba62df47eb745b19f7a510c7f17cfb433cab7e745bc19c7b3fc8f6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:49 compute-0 podman[260316]: 2025-11-25 23:57:49.114041876 +0000 UTC m=+0.112736586 container init a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:57:49 compute-0 podman[260316]: 2025-11-25 23:57:49.121112505 +0000 UTC m=+0.119807195 container start a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 25 23:57:49 compute-0 podman[260316]: 2025-11-25 23:57:49.02629985 +0000 UTC m=+0.024994560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:57:49 compute-0 podman[260316]: 2025-11-25 23:57:49.12427559 +0000 UTC m=+0.122970290 container attach a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 25 23:57:49 compute-0 keen_taussig[260332]: {
Nov 25 23:57:49 compute-0 keen_taussig[260332]:     "0": [
Nov 25 23:57:49 compute-0 keen_taussig[260332]:         {
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "devices": [
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "/dev/loop3"
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             ],
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_name": "ceph_lv0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_size": "21470642176",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "name": "ceph_lv0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "tags": {
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.cluster_name": "ceph",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.crush_device_class": "",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.encrypted": "0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.osd_id": "0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.type": "block",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.vdo": "0"
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             },
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "type": "block",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "vg_name": "ceph_vg0"
Nov 25 23:57:49 compute-0 keen_taussig[260332]:         }
Nov 25 23:57:49 compute-0 keen_taussig[260332]:     ],
Nov 25 23:57:49 compute-0 keen_taussig[260332]:     "1": [
Nov 25 23:57:49 compute-0 keen_taussig[260332]:         {
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "devices": [
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "/dev/loop4"
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             ],
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_name": "ceph_lv1",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_size": "21470642176",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "name": "ceph_lv1",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "tags": {
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.cluster_name": "ceph",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.crush_device_class": "",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.encrypted": "0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.osd_id": "1",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.type": "block",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.vdo": "0"
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             },
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "type": "block",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "vg_name": "ceph_vg1"
Nov 25 23:57:49 compute-0 keen_taussig[260332]:         }
Nov 25 23:57:49 compute-0 keen_taussig[260332]:     ],
Nov 25 23:57:49 compute-0 keen_taussig[260332]:     "2": [
Nov 25 23:57:49 compute-0 keen_taussig[260332]:         {
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "devices": [
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "/dev/loop5"
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             ],
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_name": "ceph_lv2",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_size": "21470642176",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "name": "ceph_lv2",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "tags": {
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.cluster_name": "ceph",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.crush_device_class": "",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.encrypted": "0",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.osd_id": "2",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.type": "block",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:                 "ceph.vdo": "0"
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             },
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "type": "block",
Nov 25 23:57:49 compute-0 keen_taussig[260332]:             "vg_name": "ceph_vg2"
Nov 25 23:57:49 compute-0 keen_taussig[260332]:         }
Nov 25 23:57:49 compute-0 keen_taussig[260332]:     ]
Nov 25 23:57:49 compute-0 keen_taussig[260332]: }
Nov 25 23:57:49 compute-0 systemd[1]: libpod-a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35.scope: Deactivated successfully.
Nov 25 23:57:49 compute-0 podman[260316]: 2025-11-25 23:57:49.869939471 +0000 UTC m=+0.868634191 container died a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 25 23:57:49 compute-0 ceph-mon[75654]: pgmap v873: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee23609cfba62df47eb745b19f7a510c7f17cfb433cab7e745bc19c7b3fc8f6a-merged.mount: Deactivated successfully.
Nov 25 23:57:49 compute-0 podman[260316]: 2025-11-25 23:57:49.939268685 +0000 UTC m=+0.937963385 container remove a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 25 23:57:49 compute-0 systemd[1]: libpod-conmon-a8c700ee8b428f1edc5a735525c5c31a228aa8589c1d00df8a933bff8236fc35.scope: Deactivated successfully.
Nov 25 23:57:49 compute-0 sudo[260211]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:50 compute-0 sudo[260355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:57:50 compute-0 sudo[260355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:50 compute-0 sudo[260355]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:50 compute-0 sudo[260380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:57:50 compute-0 sudo[260380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:50 compute-0 sudo[260380]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:50 compute-0 sudo[260405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:57:50 compute-0 sudo[260405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:50 compute-0 sudo[260405]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:50 compute-0 sudo[260430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:57:50 compute-0 sudo[260430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:50 compute-0 podman[260454]: 2025-11-25 23:57:50.437971742 +0000 UTC m=+0.083662708 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 23:57:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v874: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:50 compute-0 podman[260516]: 2025-11-25 23:57:50.676377828 +0000 UTC m=+0.047588324 container create b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 25 23:57:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:50 compute-0 systemd[1]: Started libpod-conmon-b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c.scope.
Nov 25 23:57:50 compute-0 podman[260516]: 2025-11-25 23:57:50.652024216 +0000 UTC m=+0.023234772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:57:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:57:50 compute-0 podman[260516]: 2025-11-25 23:57:50.785502376 +0000 UTC m=+0.156712892 container init b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 25 23:57:50 compute-0 podman[260516]: 2025-11-25 23:57:50.796304494 +0000 UTC m=+0.167514980 container start b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:57:50 compute-0 podman[260516]: 2025-11-25 23:57:50.800593219 +0000 UTC m=+0.171803775 container attach b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 25 23:57:50 compute-0 frosty_hypatia[260532]: 167 167
Nov 25 23:57:50 compute-0 systemd[1]: libpod-b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c.scope: Deactivated successfully.
Nov 25 23:57:50 compute-0 podman[260516]: 2025-11-25 23:57:50.803545318 +0000 UTC m=+0.174755804 container died b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 23:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cfe88d61f6e11fe4a2fc479d3e58e3d9d780e9d7762fdfde99705a92b1e9d32-merged.mount: Deactivated successfully.
Nov 25 23:57:50 compute-0 podman[260516]: 2025-11-25 23:57:50.854531232 +0000 UTC m=+0.225741718 container remove b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:57:50 compute-0 systemd[1]: libpod-conmon-b6f8aa3bf477b41e7a9ec26adcb0cdbdce1351570f9fba7660c182db9d41011c.scope: Deactivated successfully.
Nov 25 23:57:51 compute-0 podman[260557]: 2025-11-25 23:57:51.081865631 +0000 UTC m=+0.048233990 container create 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 25 23:57:51 compute-0 systemd[1]: Started libpod-conmon-25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9.scope.
Nov 25 23:57:51 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:57:51 compute-0 podman[260557]: 2025-11-25 23:57:51.067265321 +0000 UTC m=+0.033633670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3737e2d3464d58113743d730ff818df3b46b8816af120d2209c3cacba139334f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3737e2d3464d58113743d730ff818df3b46b8816af120d2209c3cacba139334f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3737e2d3464d58113743d730ff818df3b46b8816af120d2209c3cacba139334f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3737e2d3464d58113743d730ff818df3b46b8816af120d2209c3cacba139334f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:57:51 compute-0 podman[260557]: 2025-11-25 23:57:51.204128621 +0000 UTC m=+0.170497010 container init 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:57:51 compute-0 podman[260557]: 2025-11-25 23:57:51.215767072 +0000 UTC m=+0.182135441 container start 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 25 23:57:51 compute-0 podman[260557]: 2025-11-25 23:57:51.219697828 +0000 UTC m=+0.186066237 container attach 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 25 23:57:51 compute-0 ceph-mon[75654]: pgmap v874: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]: {
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "osd_id": 2,
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "type": "bluestore"
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:     },
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "osd_id": 1,
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "type": "bluestore"
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:     },
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "osd_id": 0,
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:         "type": "bluestore"
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]:     }
Nov 25 23:57:52 compute-0 upbeat_hamilton[260573]: }
Nov 25 23:57:52 compute-0 systemd[1]: libpod-25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9.scope: Deactivated successfully.
Nov 25 23:57:52 compute-0 podman[260557]: 2025-11-25 23:57:52.313490919 +0000 UTC m=+1.279859248 container died 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:57:52 compute-0 systemd[1]: libpod-25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9.scope: Consumed 1.108s CPU time.
Nov 25 23:57:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-3737e2d3464d58113743d730ff818df3b46b8816af120d2209c3cacba139334f-merged.mount: Deactivated successfully.
Nov 25 23:57:52 compute-0 podman[260557]: 2025-11-25 23:57:52.38385499 +0000 UTC m=+1.350223329 container remove 25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 23:57:52 compute-0 systemd[1]: libpod-conmon-25ed4f6e54d9ac386512f94bb4a63b4951a6faefa94940a81a1a6f54b3f009c9.scope: Deactivated successfully.
Nov 25 23:57:52 compute-0 sudo[260430]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:57:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:57:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:57:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:57:52 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev f621be23-c7be-4453-8439-22026dc0a501 does not exist
Nov 25 23:57:52 compute-0 sudo[260620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:57:52 compute-0 sudo[260620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:52 compute-0 sudo[260620]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v875: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:52 compute-0 sudo[260645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:57:52 compute-0 sudo[260645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:57:52 compute-0 sudo[260645]: pam_unix(sudo:session): session closed for user root
Nov 25 23:57:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:57:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:57:53 compute-0 ceph-mon[75654]: pgmap v875: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v876: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:55 compute-0 ceph-mon[75654]: pgmap v876: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:57:56
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['backups', 'vms', 'images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.mgr']
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:57:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v877: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:57 compute-0 ceph-mon[75654]: pgmap v877: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:58 compute-0 podman[260671]: 2025-11-25 23:57:58.279694953 +0000 UTC m=+0.095939097 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 25 23:57:58 compute-0 podman[260670]: 2025-11-25 23:57:58.358351457 +0000 UTC m=+0.176254985 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 23:57:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v878: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:57:59 compute-0 ceph-mon[75654]: pgmap v878: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v879: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:01 compute-0 ceph-mon[75654]: pgmap v879: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:58:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v880: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:03 compute-0 ceph-mon[75654]: pgmap v880: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v881: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:05 compute-0 ceph-mon[75654]: pgmap v881: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.693632) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085693669, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 572, "num_deletes": 251, "total_data_size": 421533, "memory_usage": 431840, "flush_reason": "Manual Compaction"}
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085698964, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 415976, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17671, "largest_seqno": 18242, "table_properties": {"data_size": 412833, "index_size": 1115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7278, "raw_average_key_size": 19, "raw_value_size": 406539, "raw_average_value_size": 1069, "num_data_blocks": 50, "num_entries": 380, "num_filter_entries": 380, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764115045, "oldest_key_time": 1764115045, "file_creation_time": 1764115085, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5390 microseconds, and 2847 cpu microseconds.
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.699020) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 415976 bytes OK
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.699041) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.700384) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.700405) EVENT_LOG_v1 {"time_micros": 1764115085700399, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.700423) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 418358, prev total WAL file size 418358, number of live WAL files 2.
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.700989) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(406KB)], [41(5466KB)]
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085701040, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 6014051, "oldest_snapshot_seqno": -1}
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 3771 keys, 4825966 bytes, temperature: kUnknown
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085738105, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 4825966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 4800393, "index_size": 15032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 89630, "raw_average_key_size": 23, "raw_value_size": 4732153, "raw_average_value_size": 1254, "num_data_blocks": 641, "num_entries": 3771, "num_filter_entries": 3771, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764115085, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.738450) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 4825966 bytes
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.740233) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.6 rd, 129.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 5.3 +0.0 blob) out(4.6 +0.0 blob), read-write-amplify(26.1) write-amplify(11.6) OK, records in: 4284, records dropped: 513 output_compression: NoCompression
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.740266) EVENT_LOG_v1 {"time_micros": 1764115085740252, "job": 20, "event": "compaction_finished", "compaction_time_micros": 37217, "compaction_time_cpu_micros": 24411, "output_level": 6, "num_output_files": 1, "total_output_size": 4825966, "num_input_records": 4284, "num_output_records": 3771, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085740524, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115085742384, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.700871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.742525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.742534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.742537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.742540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:58:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:58:05.742542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:58:05 compute-0 nova_compute[252550]: 2025-11-25 23:58:05.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:58:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v882: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:06 compute-0 nova_compute[252550]: 2025-11-25 23:58:06.817 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:58:06 compute-0 nova_compute[252550]: 2025-11-25 23:58:06.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:58:06 compute-0 nova_compute[252550]: 2025-11-25 23:58:06.821 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 23:58:07 compute-0 ceph-mon[75654]: pgmap v882: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:07 compute-0 nova_compute[252550]: 2025-11-25 23:58:07.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:58:07 compute-0 nova_compute[252550]: 2025-11-25 23:58:07.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 23:58:07 compute-0 nova_compute[252550]: 2025-11-25 23:58:07.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 23:58:07 compute-0 nova_compute[252550]: 2025-11-25 23:58:07.879 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 23:58:07 compute-0 nova_compute[252550]: 2025-11-25 23:58:07.880 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:58:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v883: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:08 compute-0 nova_compute[252550]: 2025-11-25 23:58:08.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:58:09 compute-0 sshd-session[260718]: Accepted publickey for zuul from 192.168.122.10 port 57878 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:58:09 compute-0 systemd-logind[789]: New session 52 of user zuul.
Nov 25 23:58:09 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 25 23:58:09 compute-0 sshd-session[260718]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:58:09 compute-0 sudo[260722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 25 23:58:09 compute-0 sudo[260722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:58:09 compute-0 nova_compute[252550]: 2025-11-25 23:58:09.534 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:58:09 compute-0 nova_compute[252550]: 2025-11-25 23:58:09.535 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:58:09 compute-0 nova_compute[252550]: 2025-11-25 23:58:09.535 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:58:09 compute-0 nova_compute[252550]: 2025-11-25 23:58:09.536 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 23:58:09 compute-0 nova_compute[252550]: 2025-11-25 23:58:09.536 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:58:09 compute-0 ceph-mon[75654]: pgmap v883: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:58:09 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847819789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:58:09 compute-0 nova_compute[252550]: 2025-11-25 23:58:09.998 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:58:10 compute-0 nova_compute[252550]: 2025-11-25 23:58:10.168 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:58:10 compute-0 nova_compute[252550]: 2025-11-25 23:58:10.169 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5218MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 23:58:10 compute-0 nova_compute[252550]: 2025-11-25 23:58:10.170 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:58:10 compute-0 nova_compute[252550]: 2025-11-25 23:58:10.170 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:58:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v884: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:10 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1847819789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:58:10 compute-0 nova_compute[252550]: 2025-11-25 23:58:10.944 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 23:58:10 compute-0 nova_compute[252550]: 2025-11-25 23:58:10.945 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 23:58:10 compute-0 nova_compute[252550]: 2025-11-25 23:58:10.965 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:58:11 compute-0 sshd-session[260716]: Connection closed by 45.78.216.132 port 50694 [preauth]
Nov 25 23:58:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:58:11 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465709058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:58:11 compute-0 nova_compute[252550]: 2025-11-25 23:58:11.479 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:58:11 compute-0 nova_compute[252550]: 2025-11-25 23:58:11.486 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:58:11 compute-0 nova_compute[252550]: 2025-11-25 23:58:11.541 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:58:11 compute-0 nova_compute[252550]: 2025-11-25 23:58:11.544 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 23:58:11 compute-0 nova_compute[252550]: 2025-11-25 23:58:11.544 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:58:11 compute-0 ceph-mon[75654]: pgmap v884: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:11 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1465709058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:58:12 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14704 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v885: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:12 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14706 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:12 compute-0 ceph-mon[75654]: from='client.14704 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 23:58:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3065359790' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 23:58:13 compute-0 nova_compute[252550]: 2025-11-25 23:58:13.545 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:58:13 compute-0 nova_compute[252550]: 2025-11-25 23:58:13.546 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:58:13 compute-0 nova_compute[252550]: 2025-11-25 23:58:13.546 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:58:13 compute-0 ceph-mon[75654]: pgmap v885: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:13 compute-0 ceph-mon[75654]: from='client.14706 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:13 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3065359790' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 23:58:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v886: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:15 compute-0 ceph-mon[75654]: pgmap v886: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v887: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:58:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2133630908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:58:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:58:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2133630908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:58:17 compute-0 ceph-mon[75654]: pgmap v887: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2133630908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:58:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2133630908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:58:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v888: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:18 compute-0 ovs-vsctl[261070]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 23:58:19 compute-0 ceph-mon[75654]: pgmap v888: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:19 compute-0 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 23:58:19 compute-0 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 23:58:19 compute-0 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 23:58:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v889: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:20 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: cache status {prefix=cache status} (starting...)
Nov 25 23:58:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:20 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: client ls {prefix=client ls} (starting...)
Nov 25 23:58:20 compute-0 podman[261333]: 2025-11-25 23:58:20.953730204 +0000 UTC m=+0.339690756 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 23:58:21 compute-0 lvm[261432]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 23:58:21 compute-0 lvm[261432]: VG ceph_vg0 finished
Nov 25 23:58:21 compute-0 lvm[261462]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 25 23:58:21 compute-0 lvm[261462]: VG ceph_vg2 finished
Nov 25 23:58:21 compute-0 lvm[261465]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 25 23:58:21 compute-0 lvm[261465]: VG ceph_vg1 finished
Nov 25 23:58:21 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 23:58:21 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14714 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:21 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 23:58:21 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 23:58:21 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14716 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:21 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 23:58:21 compute-0 ceph-mon[75654]: pgmap v889: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:21 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 23:58:22 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 23:58:22 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 23:58:22 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 23:58:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 25 23:58:22 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168829886' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 23:58:22 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14722 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:22 compute-0 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 23:58:22 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:58:22.505+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 23:58:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v890: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:22 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: ops {prefix=ops} (starting...)
Nov 25 23:58:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:58:22 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/166352582' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:58:22 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 25 23:58:22 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784088767' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 23:58:22 compute-0 ceph-mon[75654]: from='client.14714 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:22 compute-0 ceph-mon[75654]: from='client.14716 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:22 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/168829886' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 23:58:22 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/166352582' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:58:22 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2784088767' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 23:58:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 25 23:58:23 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3608204187' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 23:58:23 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: session ls {prefix=session ls} (starting...)
Nov 25 23:58:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 23:58:23 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655172804' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 23:58:23 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: status {prefix=status} (starting...)
Nov 25 23:58:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 25 23:58:23 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2040537186' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 23:58:23 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 23:58:23 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1944917165' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 23:58:23 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14736 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:23 compute-0 ceph-mon[75654]: from='client.14722 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:23 compute-0 ceph-mon[75654]: pgmap v890: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:23 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3608204187' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 25 23:58:23 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2655172804' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 23:58:23 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2040537186' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 23:58:23 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1944917165' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 23:58:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 23:58:24 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/402274012' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 23:58:24 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14740 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 23:58:24 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424918935' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 23:58:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v891: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 25 23:58:24 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2302812068' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 23:58:24 compute-0 ceph-mon[75654]: from='client.14736 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:24 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/402274012' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 23:58:24 compute-0 ceph-mon[75654]: from='client.14740 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:24 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3424918935' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 23:58:24 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2302812068' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 23:58:24 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 25 23:58:24 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401400288' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 23:58:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 25 23:58:25 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35714324' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 23:58:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 23:58:25 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183257717' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 23:58:25 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14752 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:25 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:58:25.526+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 23:58:25 compute-0 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 25 23:58:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:25 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14754 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 25 23:58:25 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611995556' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 23:58:25 compute-0 ceph-mon[75654]: pgmap v891: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:25 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1401400288' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 23:58:25 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/35714324' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 23:58:25 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2183257717' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 23:58:26 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14758 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:58:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:58:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:58:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:58:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:58:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:58:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 25 23:58:26 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3045823816' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 23:58:26 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14762 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v892: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:26 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14766 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016459 3 0.000043
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016459 3 0.000038
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016457 3 0.000068
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016375 3 0.000096
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016436 3 0.000100
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016215 3 0.000106
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016356 3 0.000112
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015997 3 0.000072
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=37/40 n=0 ec=20/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016193 3 0.000089
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=37/40 n=0 ec=20/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=37/40 n=0 ec=20/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=37/40 n=0 ec=20/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015494 3 0.000254
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015293 3 0.000099
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016498 3 0.000076
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015072 3 0.000711
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013990 3 0.001576
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015737 3 0.002123
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013893 3 0.000061
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015233 3 0.000122
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013842 3 0.000971
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013766 3 0.000184
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013657 3 0.000141
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013660 3 0.002636
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013714 3 0.000152
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013461 3 0.001285
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.013556 3 0.000261
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/20 les/c/f=40/21/0 sis=37) [2] r=0 lpr=37 pi=[20,37)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:34.109434+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 56000512 unmapped: 2662400 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 40 heartbeat osd_stat(store_statfs(0x4fe165000/0x0/0x4ffc00000, data 0x28d03/0x67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 40 handle_osd_map epochs [41,41], i have 40, src has [1,41]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:35.109625+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 56016896 unmapped: 2646016 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 41 heartbeat osd_stat(store_statfs(0x4fe161000/0x0/0x4ffc00000, data 0x2a173/0x6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:36.109774+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 56041472 unmapped: 2621440 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 320659 data_alloc: 218103808 data_used: 0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:37.109946+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 56147968 unmapped: 2514944 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 41 handle_osd_map epochs [42,42], i have 41, src has [1,42]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.485052109s of 11.599431038s, submitted: 234
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000038
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000179 1 0.000073
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000246 1 0.000068
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000035
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000156 1 0.000073
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000074 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000142 1 0.000047
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000037
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000128 1 0.000073
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000208 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000083 1 0.000067
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001609 1 0.000150
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000019
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000027
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000012 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000009
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000030
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000017 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000008
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.543269 13 0.000091
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.553523 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.553617 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.553666 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456751823s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465843201s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] exit Reset 0.000033 1 0.000052
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456731796s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465843201s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.478674 4 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.495312 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.505432 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.505516 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520897865s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530090332s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] exit Reset 0.000023 1 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520885468s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530090332s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.478663 4 0.000109
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.495329 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.506386 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.506419 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520854950s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530136108s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] exit Reset 0.000021 1 0.000032
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520842552s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530136108s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.543760 13 0.000051
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.554367 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.554459 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.554536 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456146240s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465744019s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] exit Reset 0.000123 1 0.000214
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.456072807s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465744019s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.544096 13 0.000050
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.554573 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.554816 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.554853 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.479349 4 0.000039
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.495994 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.504659 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.504692 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455876350s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465759277s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520113945s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530021667s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] exit Reset 0.000062 1 0.000088
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] exit Reset 0.000054 1 0.000084
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455842018s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] exit Start 0.000012 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.520085335s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530021667s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.544084 13 0.000386
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.555031 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.555066 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.555080 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.479517 4 0.000093
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455184937s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465225220s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496119 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.504630 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.504670 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] exit Reset 0.000060 1 0.000062
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519988060s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530075073s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.455161095s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] exit Reset 0.000048 1 0.000078
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] exit Start 0.000012 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519961357s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530075073s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.479829 4 0.000049
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496254 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.504737 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.504777 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.544674 13 0.000044
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.555360 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519800186s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530029297s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.555459 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.555488 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] exit Reset 0.000039 1 0.000063
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519779205s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530029297s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454953194s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465225220s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] exit Reset 0.000054 1 0.000082
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.479790 4 0.000048
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496837 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.505215 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.505244 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519281387s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530158997s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] exit Reset 0.000047 1 0.000578
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] exit Start 0.000013 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454927444s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465225220s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.519255638s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530158997s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.545584 13 0.000049
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.556397 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.556472 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.556501 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.480510 4 0.000060
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497037 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.505627 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.505655 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454054832s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465164185s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] exit Reset 0.000066 1 0.000095
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454019547s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465164185s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.479984 4 0.000075
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497165 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.505501 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.505542 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518911362s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530174255s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] exit Reset 0.000049 1 0.000703
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.545773 13 0.000080
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518884659s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530174255s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.556813 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.556890 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.556920 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518821716s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530166626s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] exit Reset 0.000295 1 0.000316
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.480691 4 0.000031
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497227 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.506993 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.543632 13 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453702927s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465126038s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518773079s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530166626s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.556117 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] exit Reset 0.000079 1 0.000160
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.556273 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.556362 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453660965s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.507018 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454210281s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465759277s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518668175s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530258179s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] exit Reset 0.000040 1 0.000199
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] exit Reset 0.000109 1 0.002219
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.546190 13 0.000165
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518647194s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530258179s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.557340 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.557382 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.557404 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.454153061s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465759277s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453030586s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464744568s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.481078 4 0.000027
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497335 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.507508 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.507537 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518463135s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530212402s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.546294 13 0.000071
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] exit Reset 0.000083 1 0.000130
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] exit Reset 0.000029 1 0.000044
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.557214 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.557299 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.557328 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.546533 13 0.000082
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] exit Start 0.000042 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.557708 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452983856s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464744568s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.557789 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.557822 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518447876s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530212402s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452710152s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464645386s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] exit Reset 0.000093 1 0.000140
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.546757 13 0.000055
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.557940 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.558011 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.558051 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452659607s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452497482s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464584351s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.481346 4 0.000037
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497511 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.505855 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.505886 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] exit Reset 0.000086 1 0.000143
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518067360s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530242920s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] exit Reset 0.000037 1 0.000169
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.481448 4 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] exit Start 0.000020 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497257 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.506056 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452450752s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464584351s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.506088 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] exit Start 0.000067 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518048286s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530242920s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518050194s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530319214s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.544370 13 0.000256
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.557009 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.557232 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.557279 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] exit Reset 0.000086 1 0.000135
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453290939s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465820312s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] exit Reset 0.000120 1 0.002195
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.518003464s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] exit Start 0.000024 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.453218460s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465820312s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.547125 13 0.000070
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.558759 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.558797 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.558810 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451832771s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464576721s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.482033 4 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] exit Reset 0.000044 1 0.000434
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496093 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451805115s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508694 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.547497 13 0.000062
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508752 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452794075s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465126038s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.558814 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.558889 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.558920 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] exit Reset 0.001051 1 0.001076
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.482146 4 0.000031
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497702 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508923 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508946 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] exit Start 0.000010 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517469406s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530319214s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452293396s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465126038s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451684952s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464576721s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] exit Reset 0.000094 1 0.000147
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] exit Reset 0.000101 1 0.000144
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] exit Start 0.000020 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517417908s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530319214s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451636314s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464576721s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517217636s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530265808s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] exit Reset 0.000211 1 0.000228
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.517190933s) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530265808s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.548825 13 0.000061
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.482376 4 0.000040
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.559288 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497760 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.559387 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.509336 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.559428 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.509416 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.547416 13 0.000053
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.558419 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.559457 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.559526 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451214790s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464385986s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452562332s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.465751648s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] exit Reset 0.000120 1 0.000161
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] exit Reset 0.000155 1 0.000174
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] exit Start 0.000043 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451136589s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464385986s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.452430725s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.465751648s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.548172 13 0.000060
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.559280 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.559365 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.559392 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451215744s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464645386s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] exit Reset 0.000028 1 0.000040
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.482664 4 0.000039
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.497951 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508379 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.451201439s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464645386s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508413 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.549232 13 0.000072
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516825676s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530311584s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.559987 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.560076 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.560101 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450861931s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464378357s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] exit Reset 0.000049 1 0.000286
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] exit Reset 0.000023 1 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450849533s) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464378357s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.549286 13 0.000083
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.560085 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.560222 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.482814 4 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496744 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508812 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.560257 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508837 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516772270s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530380249s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] exit Start 0.000078 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516798973s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530311584s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] exit Reset 0.000024 1 0.000038
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516757965s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530380249s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450617790s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464271545s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.549364 13 0.000060
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.549294 13 0.000216
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.560242 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.559846 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.560471 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.559893 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.560521 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] exit Reset 0.000105 1 0.000152
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.559916 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450549126s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464279175s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450786591s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464523315s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] exit Start 0.000031 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450569153s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464271545s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516482353s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530273438s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] exit Reset 0.000724 1 0.000766
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.483061 4 0.000047
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496920 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] exit Start 0.000025 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516411781s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530273438s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] exit Reset 0.000037 1 0.000052
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450525284s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464279175s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.483269 4 0.000030
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496961 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508042 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508066 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516325951s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530418396s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.549770 13 0.000061
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] exit Reset 0.000027 1 0.000043
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516311646s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530418396s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 4.483314 4 0.000044
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 4.496927 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.508368 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.508404 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516283035s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530479431s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] exit Reset 0.000022 1 0.000045
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.516271591s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530479431s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010290 2 0.000119
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009259 2 0.000071
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] exit Reset 0.000045 1 0.000068
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008987 2 0.000077
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009755 2 0.000074
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.450763702s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464523315s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008522 2 0.000049
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005929 2 0.000067
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005793 2 0.000022
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005535 2 0.000039
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005686 2 0.000020
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 5.511394 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] exit Started 5.511446 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515798569s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active pruub 78.530448914s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] exit Reset 0.000101 1 0.000799
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] exit Start 0.000019 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42 pruub=11.515743256s) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.530448914s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.560875 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.561880 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.562452 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448849678s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 82.464324951s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] exit Reset 0.000154 1 0.001453
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] exit Start 0.000027 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.448781013s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.464324951s@ mbc={}] enter Started/Stray
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000088 1 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000074 1 0.000023
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000025
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000028
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 42 handle_osd_map epochs [42,42], i have 42, src has [1,42]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000028
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000029 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000013
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000065 1 0.000043
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:38.110073+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000065 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000132 1 0.000081
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000028
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000048 1 0.000038
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000025
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000061 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000025
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000224 1 0.000039
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000171 1 0.000035
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000062 1 0.000027
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000017
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000195 1 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000011
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000177 1 0.000034
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000017
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000123 1 0.000031
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000068 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000033
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000329 1 0.000065
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000017
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000244 1 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000027 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000033
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000014
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000124 1 0.000040
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000014
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000132 1 0.000034
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: unregistering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023451 2 0.000045
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022038 2 0.000024
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.020212 2 0.000024
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021885 2 0.000023
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.018810 2 0.000027
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017354 2 0.000056
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016929 2 0.000021
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016264 2 0.000023
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014475 2 0.000100
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016357 2 0.000024
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015188 2 0.000041
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012612 2 0.000032
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011874 2 0.000081
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011340 2 0.000099
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010950 2 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013448 2 0.000037
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010413 2 0.000037
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014610 2 0.000021
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010093 2 0.000057
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014152 2 0.000041
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 42 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57114624 unmapped: 1548288 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 42 handle_osd_map epochs [42,43], i have 42, src has [1,43]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 42 handle_osd_map epochs [42,43], i have 43, src has [1,43]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966562 2 0.000043
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988730 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991974 2 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966673 2 0.000034
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997609 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990255 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966633 2 0.000080
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986948 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992159 2 0.000057
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.997952 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992303 2 0.000017
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998176 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966827 2 0.000111
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988832 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966898 2 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983245 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967003 2 0.000032
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966488 2 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982924 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984546 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967038 2 0.000062
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985963 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992808 2 0.000039
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000410 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966658 2 0.000025
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982108 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967266 2 0.000020
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984271 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966194 2 0.000029
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979873 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993048 2 0.000028
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001744 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966205 2 0.000040
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980901 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967040 2 0.000077
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981752 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966245 2 0.000070
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980630 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993416 2 0.000032
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002590 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967084 2 0.000056
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979873 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967063 2 0.000043
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979382 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967218 2 0.000034
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.978860 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.966922 2 0.000027
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.977958 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993911 2 0.000025
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003400 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993854 2 0.000130
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004229 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994055 2 0.000023
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004851 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967186 2 0.000078
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.977479 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.967375 2 0.000025
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.977948 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007613 3 0.000132
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007651 3 0.000131
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:39.110220+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 7 sent 5 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:08.928447+0000 osd.2 (osd.2) 6 : cluster [DBG] 2.c scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:08.942533+0000 osd.2 (osd.2) 7 : cluster [DBG] 2.c scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017206 4 0.000114
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017262 4 0.000093
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017309 4 0.000112
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017193 4 0.000066
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017169 4 0.000063
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017156 4 0.000059
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017177 4 0.000096
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017389 4 0.000085
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016982 4 0.000051
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016970 4 0.000049
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017065 4 0.000189
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016950 4 0.000060
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017626 4 0.000138
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016919 4 0.000098
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016882 4 0.000058
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016623 4 0.000046
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016924 4 0.000101
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016562 4 0.000117
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016691 4 0.000112
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016430 4 0.000067
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016303 4 0.000092
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015884 4 0.000071
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015939 4 0.000092
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016409 4 0.000943
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016036 4 0.000354
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016526 4 0.000732
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [2] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [2] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015912 4 0.000571
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [2] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022272 7 0.000054
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022363 7 0.000123
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021802 7 0.000114
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022935 7 0.000042
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000061 1 0.000063
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.025149 7 0.000107
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000528 1 0.000081
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000566 1 0.000020
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000713 1 0.000033
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000530 1 0.000106
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034639 7 0.000096
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030956 7 0.000097
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030642 7 0.000129
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030424 7 0.000059
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030402 7 0.000125
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000125 1 0.000105
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030127 7 0.000152
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030562 7 0.000065
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033983 7 0.000069
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032532 7 0.000088
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032327 7 0.000207
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033934 7 0.000086
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034496 7 0.000117
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034254 7 0.000113
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035110 7 0.000074
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034136 7 0.000119
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031401 7 0.000063
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031498 7 0.000051
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032837 7 0.000101
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032083 7 0.000114
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000343 1 0.000032
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000443 1 0.000020
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000481 1 0.000150
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032008 7 0.000341
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030605 7 0.000291
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032369 7 0.000107
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030249 7 0.000709
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035322 7 0.000054
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033144 7 0.000098
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032620 7 0.000119
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029031 7 0.000178
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000672 1 0.000028
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030934 7 0.000113
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000704 1 0.000043
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000705 1 0.000021
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000684 1 0.000019
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000693 1 0.000017
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000852 1 0.000017
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000922 1 0.000024
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001066 1 0.000015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001161 1 0.000013
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001204 1 0.000015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001459 1 0.000145
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001406 1 0.000016
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001426 1 0.000024
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001497 1 0.000021
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001552 1 0.000020
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001370 1 0.000022
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001411 1 0.000015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.009527 1 0.000046
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.009607 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.18( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.031920 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001496 1 0.000014
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001535 1 0.000014
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001640 1 0.000053
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001711 1 0.000016
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001847 1 0.000017
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001900 1 0.000016
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001971 1 0.000064
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033730 7 0.000101
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035788 7 0.000128
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033466 7 0.000050
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034143 7 0.000175
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036886 7 0.000077
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035902 7 0.000693
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000137 1 0.000074
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033777 7 0.000055
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000098 1 0.000015
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000126 1 0.000014
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000177 1 0.000028
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000258 1 0.000014
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000302 1 0.000020
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000311 1 0.000027
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036367 7 0.000103
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035653 7 0.000239
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036310 7 0.000060
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000076 1 0.000043
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000078 1 0.000053
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000139 1 0.000016
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.18] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.017604 1 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.018176 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.19( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.040594 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022298 1 0.000018
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.022894 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1a( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.044747 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.027884 1 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.028626 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.c( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.051584 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.035704 1 0.000019
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.036269 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.061518 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036046 1 0.000061
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.036214 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1d( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.070897 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.043142 1 0.000063
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.043509 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.074525 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.050399 1 0.000044
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.050866 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.081587 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.057770 1 0.000032
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.058286 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.088849 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.064964 1 0.000047
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.065654 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.f( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.096165 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.f] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.072290 1 0.000033
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.073016 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.103208 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.079670 1 0.000025
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.080392 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.110978 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.087045 1 0.000023
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.087745 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.9( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.120301 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.9] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.094396 1 0.000044
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.095119 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.127528 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.101848 1 0.000049
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.102738 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.137276 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.17] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.109002 1 0.000035
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.109954 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.13( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.143919 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.116227 1 0.000084
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.117323 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.11( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.151618 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.11] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.123587 1 0.000036
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.124783 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.159915 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.130970 1 0.000027
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.132205 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.166374 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.138292 1 0.000053
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.139800 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.12( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.173825 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.145593 1 0.000028
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.147026 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.178460 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.7] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.152924 1 0.000027
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.154375 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.4( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.185899 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57352192 unmapped: 1310720 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.166205 1 0.000065
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.167842 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.16( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.200753 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.16] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.171217 1 0.000039
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.172696 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.4( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.204788 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 43 heartbeat osd_stat(store_statfs(0x4fe160000/0x0/0x4ffc00000, data 0x2be25/0x6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.177086 1 0.000164
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.178671 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.210823 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.3] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.182231 1 0.000023
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.183671 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.214300 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.189456 1 0.000027
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.190976 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.5( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.223429 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.197043 1 0.000098
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.198598 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1c( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.228882 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.204104 1 0.000039
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.205772 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.1e( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.241152 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.212105 1 0.000125
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.213867 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.247068 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.218835 1 0.000047
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.220719 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.2( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.253417 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.2] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.226103 1 0.000024
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.228110 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.b( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.259111 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.233199 1 0.000040
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.233356 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.3( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.267144 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.240534 1 0.000027
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.240653 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.14( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.276547 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.248224 1 0.000032
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.250148 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.279246 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.255497 1 0.000033
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.255660 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.19( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.289865 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.262991 1 0.000026
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.263208 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.296707 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.8] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.270473 1 0.000032
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.270783 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.307703 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.16] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.277626 1 0.000028
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.277994 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.13( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.314556 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.284916 1 0.000020
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.285279 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.2( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.319091 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.2] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.291848 1 0.000039
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.291975 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[2.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.328585 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[2.11] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.299565 1 0.000052
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.299718 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.7( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.335470 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.7] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.307367 1 0.000029
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.307603 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 43 pg[5.15( empty lb MIN local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.343964 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 scrub-queue::remove_from_osd_queue removing pg[5.15] failed. State was: not registered w/ OSD
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 7) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:08.928447+0000 osd.2 (osd.2) 6 : cluster [DBG] 2.c scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:08.942533+0000 osd.2 (osd.2) 7 : cluster [DBG] 2.c scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:40.110369+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57360384 unmapped: 1302528 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:41.110518+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57360384 unmapped: 1302528 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 306714 data_alloc: 218103808 data_used: 8192
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:42.110658+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57393152 unmapped: 1269760 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:43.110810+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57393152 unmapped: 1269760 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:44.110929+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:13.956769+0000 osd.2 (osd.2) 8 : cluster [DBG] 2.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:13.970942+0000 osd.2 (osd.2) 9 : cluster [DBG] 2.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 9) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:13.956769+0000 osd.2 (osd.2) 8 : cluster [DBG] 2.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:13.970942+0000 osd.2 (osd.2) 9 : cluster [DBG] 2.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57393152 unmapped: 1269760 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 43 heartbeat osd_stat(store_statfs(0x4fe15d000/0x0/0x4ffc00000, data 0x2d2a5/0x70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 43 handle_osd_map epochs [44,47], i have 43, src has [1,47]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 43 handle_osd_map epochs [44,47], i have 47, src has [1,47]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:45.111085+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57573376 unmapped: 1089536 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:46.111224+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:15.943300+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.10 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:15.957320+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.10 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 11) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:15.943300+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.10 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:15.957320+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.10 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57606144 unmapped: 1056768 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 322801 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:47.111454+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57606144 unmapped: 1056768 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 47 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x3278b/0x7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:48.111593+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57606144 unmapped: 1056768 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:49.111751+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57606144 unmapped: 1056768 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.924494743s of 11.253636360s, submitted: 326
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 48 heartbeat osd_stat(store_statfs(0x4fe151000/0x0/0x4ffc00000, data 0x3278b/0x7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 48 heartbeat osd_stat(store_statfs(0x4fe14d000/0x0/0x4ffc00000, data 0x33d8e/0x7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57679872 unmapped: 983040 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:50.914116+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 48 handle_osd_map epochs [49,49], i have 48, src has [1,49]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57753600 unmapped: 909312 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 327833 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 49 heartbeat osd_stat(store_statfs(0x4fe14d000/0x0/0x4ffc00000, data 0x33d8e/0x7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:51.914324+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57769984 unmapped: 892928 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:52.914522+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57769984 unmapped: 892928 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:53.914710+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57794560 unmapped: 868352 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:54.914890+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:23.930816+0000 osd.2 (osd.2) 12 : cluster [DBG] 2.12 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:23.944817+0000 osd.2 (osd.2) 13 : cluster [DBG] 2.12 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 13) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:23.930816+0000 osd.2 (osd.2) 12 : cluster [DBG] 2.12 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:23.944817+0000 osd.2 (osd.2) 13 : cluster [DBG] 2.12 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 50 handle_osd_map epochs [51,52], i have 50, src has [1,52]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57868288 unmapped: 794624 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:55.915106+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:24.941302+0000 osd.2 (osd.2) 14 : cluster [DBG] 2.14 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:24.955457+0000 osd.2 (osd.2) 15 : cluster [DBG] 2.14 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 15) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:24.941302+0000 osd.2 (osd.2) 14 : cluster [DBG] 2.14 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:24.955457+0000 osd.2 (osd.2) 15 : cluster [DBG] 2.14 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 52 heartbeat osd_stat(store_statfs(0x4fe142000/0x0/0x4ffc00000, data 0x3940f/0x8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57876480 unmapped: 786432 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 339221 data_alloc: 218103808 data_used: 24576
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:56.915272+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 52 handle_osd_map epochs [52,53], i have 52, src has [1,53]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57925632 unmapped: 737280 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:57.915454+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 1 last_log 16 sent 15 num 1 unsent 1 sending 1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:27.904081+0000 osd.2 (osd.2) 16 : cluster [DBG] 2.1a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 16) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:27.904081+0000 osd.2 (osd.2) 16 : cluster [DBG] 2.1a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57966592 unmapped: 696320 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:58.915687+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 1 last_log 17 sent 16 num 1 unsent 1 sending 1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:27.918163+0000 osd.2 (osd.2) 17 : cluster [DBG] 2.1a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 17) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:27.918163+0000 osd.2 (osd.2) 17 : cluster [DBG] 2.1a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=0 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000076 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=0 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000039
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000188 1 0.000069
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001479 2 0.000067
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 54 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57974784 unmapped: 688128 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:59.915906+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.640701294s of 10.707428932s, submitted: 14
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 54 handle_osd_map epochs [54,55], i have 55, src has [1,55]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999131 2 0.000149
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.001026 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=53/39 les/c/f=55/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004944 4 0.001076
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=53/39 les/c/f=55/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=53/39 les/c/f=55/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=53/55 n=1 ec=39/22 lis/c=53/39 les/c/f=55/41/0 sis=53) [2] r=0 lpr=54 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 57999360 unmapped: 663552 heap: 58662912 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 55 heartbeat osd_stat(store_statfs(0x4fe13b000/0x0/0x4ffc00000, data 0x3c015/0x91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:00.916099+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58032128 unmapped: 1679360 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 353755 data_alloc: 218103808 data_used: 24576
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:01.916352+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58040320 unmapped: 1671168 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:02.916537+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:31.963691+0000 osd.2 (osd.2) 18 : cluster [DBG] 2.1e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:31.977683+0000 osd.2 (osd.2) 19 : cluster [DBG] 2.1e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 19) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:31.963691+0000 osd.2 (osd.2) 18 : cluster [DBG] 2.1e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:31.977683+0000 osd.2 (osd.2) 19 : cluster [DBG] 2.1e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58097664 unmapped: 1613824 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:03.916803+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58097664 unmapped: 1613824 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:04.916953+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58114048 unmapped: 1597440 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:05.917118+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:34.939351+0000 osd.2 (osd.2) 20 : cluster [DBG] 5.6 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:34.953503+0000 osd.2 (osd.2) 21 : cluster [DBG] 5.6 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 21) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:34.939351+0000 osd.2 (osd.2) 20 : cluster [DBG] 5.6 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:34.953503+0000 osd.2 (osd.2) 21 : cluster [DBG] 5.6 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58171392 unmapped: 1540096 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 365286 data_alloc: 218103808 data_used: 32768
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe130000/0x0/0x4ffc00000, data 0x416a6/0x9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:06.917317+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.8 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.8 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58171392 unmapped: 1540096 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:07.917472+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:36.921256+0000 osd.2 (osd.2) 22 : cluster [DBG] 5.8 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:36.935347+0000 osd.2 (osd.2) 23 : cluster [DBG] 5.8 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 23) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:36.921256+0000 osd.2 (osd.2) 22 : cluster [DBG] 5.8 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:36.935347+0000 osd.2 (osd.2) 23 : cluster [DBG] 5.8 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 58171392 unmapped: 1540096 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe12c000/0x0/0x4ffc00000, data 0x42ca9/0xa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 59 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:08.917674+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59269120 unmapped: 442368 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe126000/0x0/0x4ffc00000, data 0x4571c/0xa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:09.917821+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.052805901s of 10.114706993s, submitted: 16
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f(unlocked)] enter Initial
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=0 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=0 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000021
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000131 1 0.000053
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.000837 2 0.000043
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 25 23:58:26 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1208784011' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 62 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59285504 unmapped: 425984 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:10.917998+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:40.895740+0000 osd.2 (osd.2) 24 : cluster [DBG] 5.a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:40.909837+0000 osd.2 (osd.2) 25 : cluster [DBG] 5.a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 62 handle_osd_map epochs [62,63], i have 63, src has [1,63]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 25) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:40.895740+0000 osd.2 (osd.2) 24 : cluster [DBG] 5.a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:40.909837+0000 osd.2 (osd.2) 25 : cluster [DBG] 5.a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.855637 2 0.000068
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 0.856671 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.005781 3 0.000210
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000118 1 0.000086
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe122000/0x0/0x4ffc00000, data 0x46d1f/0xa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.127812 3 0.000075
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000066 0 0.000000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=62/63 n=1 ec=39/22 lis/c=62/46 les/c/f=63/47/0 sis=62) [2] r=0 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59359232 unmapped: 352256 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 387190 data_alloc: 218103808 data_used: 32768
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:11.918331+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:41.901503+0000 osd.2 (osd.2) 26 : cluster [DBG] 5.b scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:41.915630+0000 osd.2 (osd.2) 27 : cluster [DBG] 5.b scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 27) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:41.901503+0000 osd.2 (osd.2) 26 : cluster [DBG] 5.b scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:41.915630+0000 osd.2 (osd.2) 27 : cluster [DBG] 5.b scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59367424 unmapped: 344064 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:12.918575+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59367424 unmapped: 344064 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:13.918729+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:14.918896+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:15.919099+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 385089 data_alloc: 218103808 data_used: 32768
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:16.919248+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59383808 unmapped: 327680 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:17.919505+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59392000 unmapped: 319488 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:18.919930+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:47.981956+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.d scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:47.995904+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.d scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 29) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:47.981956+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.d scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:47.995904+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.d scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59408384 unmapped: 303104 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:19.920224+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59408384 unmapped: 303104 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:20.920409+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386236 data_alloc: 218103808 data_used: 32768
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:21.920595+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:22.920775+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.723413467s of 12.791498184s, submitted: 16
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:23.920927+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:52.947540+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:52.961662+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 31) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:52.947540+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:52.961662+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59441152 unmapped: 270336 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:24.921084+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59441152 unmapped: 270336 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:25.921243+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59482112 unmapped: 229376 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 389943 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:26.921381+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59482112 unmapped: 229376 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:27.921506+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:28.921624+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:29.921746+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:30.921851+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 389943 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:31.921959+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:32.922076+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:33.922216+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59506688 unmapped: 204800 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:34.922365+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59514880 unmapped: 196608 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:35.922589+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.111580849s of 13.117458344s, submitted: 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59514880 unmapped: 196608 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 391091 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:36.922698+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:06.064970+0000 osd.2 (osd.2) 32 : cluster [DBG] 5.10 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:06.079026+0000 osd.2 (osd.2) 33 : cluster [DBG] 5.10 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 33) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:06.064970+0000 osd.2 (osd.2) 32 : cluster [DBG] 5.10 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:06.079026+0000 osd.2 (osd.2) 33 : cluster [DBG] 5.10 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59523072 unmapped: 188416 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:37.922899+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59523072 unmapped: 188416 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:38.923100+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:08.064517+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.17 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:08.078520+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.17 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 35) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:08.064517+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.17 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:08.078520+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.17 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59531264 unmapped: 180224 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:39.923421+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59531264 unmapped: 180224 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:40.923589+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59539456 unmapped: 172032 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 393387 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:41.923764+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:10.985096+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.1b scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:10.999157+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.1b scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 37) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:10.985096+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.1b scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:10.999157+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.1b scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59547648 unmapped: 163840 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:42.923991+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59555840 unmapped: 155648 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:43.924231+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59555840 unmapped: 155648 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:44.924381+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59564032 unmapped: 147456 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:45.924571+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:15.017993+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.1c scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:15.032072+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.1c scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 39) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:15.017993+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.1c scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:15.032072+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.1c scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59564032 unmapped: 147456 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395683 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:46.924756+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:16.037143+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.1f deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:16.051218+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.1f deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 41) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:16.037143+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.1f deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:16.051218+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.1f deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:47.924937+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:48.925109+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:49.925288+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59580416 unmapped: 131072 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:50.925452+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59588608 unmapped: 122880 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395683 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:51.925620+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59596800 unmapped: 114688 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:52.925770+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59596800 unmapped: 114688 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:53.925935+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.988166809s of 18.023300171s, submitted: 10
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59604992 unmapped: 106496 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:54.926079+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:24.088310+0000 osd.2 (osd.2) 42 : cluster [DBG] 4.18 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:24.102401+0000 osd.2 (osd.2) 43 : cluster [DBG] 4.18 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59613184 unmapped: 98304 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 43) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:24.088310+0000 osd.2 (osd.2) 42 : cluster [DBG] 4.18 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:24.102401+0000 osd.2 (osd.2) 43 : cluster [DBG] 4.18 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:55.926327+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:25.101112+0000 osd.2 (osd.2) 44 : cluster [DBG] 4.1a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:25.115217+0000 osd.2 (osd.2) 45 : cluster [DBG] 4.1a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 45) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:25.101112+0000 osd.2 (osd.2) 44 : cluster [DBG] 4.1a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:25.115217+0000 osd.2 (osd.2) 45 : cluster [DBG] 4.1a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59621376 unmapped: 90112 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 397979 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:56.926577+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59629568 unmapped: 81920 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:57.926844+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59629568 unmapped: 81920 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:58.927001+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:59.927126+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:29.158670+0000 osd.2 (osd.2) 46 : cluster [DBG] 4.1b scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:29.172804+0000 osd.2 (osd.2) 47 : cluster [DBG] 4.1b scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 47) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:29.158670+0000 osd.2 (osd.2) 46 : cluster [DBG] 4.1b scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:29.172804+0000 osd.2 (osd.2) 47 : cluster [DBG] 4.1b scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:00.927421+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:30.185804+0000 osd.2 (osd.2) 48 : cluster [DBG] 4.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:30.199904+0000 osd.2 (osd.2) 49 : cluster [DBG] 4.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 400274 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 49) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:30.185804+0000 osd.2 (osd.2) 48 : cluster [DBG] 4.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:30.199904+0000 osd.2 (osd.2) 49 : cluster [DBG] 4.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:01.927638+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:02.927984+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:03.928159+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:33.194099+0000 osd.2 (osd.2) 50 : cluster [DBG] 4.a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:33.208225+0000 osd.2 (osd.2) 51 : cluster [DBG] 4.a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 51) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:33.194099+0000 osd.2 (osd.2) 50 : cluster [DBG] 4.a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:33.208225+0000 osd.2 (osd.2) 51 : cluster [DBG] 4.a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:04.928400+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:05.928562+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.980714798s of 12.043251991s, submitted: 10
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59662336 unmapped: 49152 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402569 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:06.928731+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:36.131618+0000 osd.2 (osd.2) 52 : cluster [DBG] 4.13 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:36.145731+0000 osd.2 (osd.2) 53 : cluster [DBG] 4.13 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59670528 unmapped: 40960 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 53) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:36.131618+0000 osd.2 (osd.2) 52 : cluster [DBG] 4.13 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:36.145731+0000 osd.2 (osd.2) 53 : cluster [DBG] 4.13 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:07.929140+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:37.156617+0000 osd.2 (osd.2) 54 : cluster [DBG] 4.1c deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:37.169982+0000 osd.2 (osd.2) 55 : cluster [DBG] 4.1c deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59678720 unmapped: 32768 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 55) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:37.156617+0000 osd.2 (osd.2) 54 : cluster [DBG] 4.1c deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:37.169982+0000 osd.2 (osd.2) 55 : cluster [DBG] 4.1c deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:08.929430+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:38.119989+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.11 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:38.134064+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.11 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 57) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:38.119989+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.11 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:38.134064+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.11 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:09.929706+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:39.113217+0000 osd.2 (osd.2) 58 : cluster [DBG] 7.1c deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:39.127337+0000 osd.2 (osd.2) 59 : cluster [DBG] 7.1c deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 59) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:39.113217+0000 osd.2 (osd.2) 58 : cluster [DBG] 7.1c deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:39.127337+0000 osd.2 (osd.2) 59 : cluster [DBG] 7.1c deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:10.930127+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 406013 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:11.930268+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59695104 unmapped: 16384 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:12.930412+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59695104 unmapped: 16384 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:13.930689+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:43.085275+0000 osd.2 (osd.2) 60 : cluster [DBG] 3.18 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:43.099299+0000 osd.2 (osd.2) 61 : cluster [DBG] 3.18 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 61) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:43.085275+0000 osd.2 (osd.2) 60 : cluster [DBG] 3.18 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:43.099299+0000 osd.2 (osd.2) 61 : cluster [DBG] 3.18 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:14.930996+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:15.931225+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:45.093375+0000 osd.2 (osd.2) 62 : cluster [DBG] 7.11 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:45.107448+0000 osd.2 (osd.2) 63 : cluster [DBG] 7.11 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 63) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:45.093375+0000 osd.2 (osd.2) 62 : cluster [DBG] 7.11 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:45.107448+0000 osd.2 (osd.2) 63 : cluster [DBG] 7.11 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 408309 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:16.931499+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:17.931657+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:18.931851+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:19.932027+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59719680 unmapped: 1040384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:20.932223+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59719680 unmapped: 1040384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 408309 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:21.932371+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:22.932567+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:23.932744+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:24.932954+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59736064 unmapped: 1024000 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:25.933279+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.015459061s of 20.058856964s, submitted: 12
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59752448 unmapped: 1007616 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409457 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:26.933509+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:56.190527+0000 osd.2 (osd.2) 64 : cluster [DBG] 3.16 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:56.204653+0000 osd.2 (osd.2) 65 : cluster [DBG] 3.16 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 65) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:56.190527+0000 osd.2 (osd.2) 64 : cluster [DBG] 3.16 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:56.204653+0000 osd.2 (osd.2) 65 : cluster [DBG] 3.16 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59752448 unmapped: 1007616 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:27.933745+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:57.154506+0000 osd.2 (osd.2) 66 : cluster [DBG] 7.a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:57.168476+0000 osd.2 (osd.2) 67 : cluster [DBG] 7.a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 67) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:57.154506+0000 osd.2 (osd.2) 66 : cluster [DBG] 7.a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:57.168476+0000 osd.2 (osd.2) 67 : cluster [DBG] 7.a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59760640 unmapped: 999424 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:28.933964+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59760640 unmapped: 999424 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:29.934091+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59768832 unmapped: 991232 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:30.934264+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:00.097898+0000 osd.2 (osd.2) 68 : cluster [DBG] 3.11 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:00.111987+0000 osd.2 (osd.2) 69 : cluster [DBG] 3.11 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 69) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:00.097898+0000 osd.2 (osd.2) 68 : cluster [DBG] 3.11 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:00.111987+0000 osd.2 (osd.2) 69 : cluster [DBG] 3.11 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59768832 unmapped: 991232 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 411752 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:31.934566+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59777024 unmapped: 983040 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:32.934710+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59777024 unmapped: 983040 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:33.934865+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59785216 unmapped: 974848 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:34.935077+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:04.090135+0000 osd.2 (osd.2) 70 : cluster [DBG] 7.8 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:04.104274+0000 osd.2 (osd.2) 71 : cluster [DBG] 7.8 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 71) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:04.090135+0000 osd.2 (osd.2) 70 : cluster [DBG] 7.8 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:04.104274+0000 osd.2 (osd.2) 71 : cluster [DBG] 7.8 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59801600 unmapped: 958464 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:35.935333+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:05.049109+0000 osd.2 (osd.2) 72 : cluster [DBG] 7.15 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:05.063189+0000 osd.2 (osd.2) 73 : cluster [DBG] 7.15 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 73) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:05.049109+0000 osd.2 (osd.2) 72 : cluster [DBG] 7.15 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:05.063189+0000 osd.2 (osd.2) 73 : cluster [DBG] 7.15 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59817984 unmapped: 942080 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 414047 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:36.935532+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59826176 unmapped: 933888 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:37.935747+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59826176 unmapped: 933888 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:38.935868+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:39.935986+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:40.936131+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 414047 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:41.936223+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:42.936338+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:43.936529+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.928712845s of 17.964544296s, submitted: 10
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:44.936723+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:14.155007+0000 osd.2 (osd.2) 74 : cluster [DBG] 3.5 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:14.169108+0000 osd.2 (osd.2) 75 : cluster [DBG] 3.5 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 75) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:14.155007+0000 osd.2 (osd.2) 74 : cluster [DBG] 3.5 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:14.169108+0000 osd.2 (osd.2) 75 : cluster [DBG] 3.5 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:45.936989+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416341 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:46.937192+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:16.126345+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:16.140448+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 77) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:16.126345+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:16.140448+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:47.937402+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:48.937919+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:18.153052+0000 osd.2 (osd.2) 78 : cluster [DBG] 7.5 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:18.167216+0000 osd.2 (osd.2) 79 : cluster [DBG] 7.5 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 79) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:18.153052+0000 osd.2 (osd.2) 78 : cluster [DBG] 7.5 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:18.167216+0000 osd.2 (osd.2) 79 : cluster [DBG] 7.5 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:49.938121+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:50.938337+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:20.179718+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.c scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:20.193651+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.c scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 81) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:20.179718+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.c scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:20.193651+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.c scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418635 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:51.938551+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:52.938694+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:53.938885+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59916288 unmapped: 843776 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:54.939012+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59916288 unmapped: 843776 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:55.939461+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418635 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:56.939710+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.752611160s of 12.872432709s, submitted: 8
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:57.940036+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:27.027593+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.8 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:27.041716+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.8 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 83) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:27.027593+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.8 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:27.041716+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.8 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:58.940567+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 819200 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:59.940784+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:29.019700+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:29.033760+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 85) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:29.019700+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:29.033760+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 819200 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:00.941346+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59949056 unmapped: 811008 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 420929 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:01.942719+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59949056 unmapped: 811008 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:02.944274+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:03.945488+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:04.946649+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:33.965860+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.1d scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:33.979992+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.1d scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 87) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:33.965860+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.1d scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:33.979992+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.1d scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:05.948061+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422077 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:06.949327+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:07.949526+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:08.950013+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.934057236s of 11.955801964s, submitted: 6
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59981824 unmapped: 778240 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:09.950395+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:38.983375+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.2 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:38.997461+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.2 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 89) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:38.983375+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.2 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:38.997461+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.2 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59990016 unmapped: 770048 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:10.951250+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423224 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:11.951657+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:12.952388+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:41.975396+0000 osd.2 (osd.2) 90 : cluster [DBG] 3.1e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:41.989590+0000 osd.2 (osd.2) 91 : cluster [DBG] 3.1e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 91) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:41.975396+0000 osd.2 (osd.2) 90 : cluster [DBG] 3.1e scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:41.989590+0000 osd.2 (osd.2) 91 : cluster [DBG] 3.1e scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:13.953223+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:43.922789+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.1 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:43.936966+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.1 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 93) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:43.922789+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.1 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:43.936966+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.1 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:14.953988+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:15.954687+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 426667 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:16.956825+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:45.955166+0000 osd.2 (osd.2) 94 : cluster [DBG] 7.1a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:45.969268+0000 osd.2 (osd.2) 95 : cluster [DBG] 7.1a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 95) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:45.955166+0000 osd.2 (osd.2) 94 : cluster [DBG] 7.1a scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:45.969268+0000 osd.2 (osd.2) 95 : cluster [DBG] 7.1a scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:17.957251+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:47.880846+0000 osd.2 (osd.2) 96 : cluster [DBG] 3.7 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:47.895043+0000 osd.2 (osd.2) 97 : cluster [DBG] 3.7 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 97) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:47.880846+0000 osd.2 (osd.2) 96 : cluster [DBG] 3.7 deep-scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:47.895043+0000 osd.2 (osd.2) 97 : cluster [DBG] 3.7 deep-scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:18.957601+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:19.957765+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60047360 unmapped: 712704 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:20.958503+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427814 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60055552 unmapped: 704512 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.913698196s of 12.949940681s, submitted: 10
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:21.958684+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:51.933398+0000 osd.2 (osd.2) 98 : cluster [DBG] 6.8 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:51.947491+0000 osd.2 (osd.2) 99 : cluster [DBG] 6.8 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 99) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:51.933398+0000 osd.2 (osd.2) 98 : cluster [DBG] 6.8 scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:51.947491+0000 osd.2 (osd.2) 99 : cluster [DBG] 6.8 scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60071936 unmapped: 688128 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:22.959115+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60080128 unmapped: 679936 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:23.959221+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60088320 unmapped: 671744 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:24.959383+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60088320 unmapped: 671744 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:25.959572+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:26.960070+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:56.020844+0000 osd.2 (osd.2) 100 : cluster [DBG] 6.f scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:56.045628+0000 osd.2 (osd.2) 101 : cluster [DBG] 6.f scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 101) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:56.020844+0000 osd.2 (osd.2) 100 : cluster [DBG] 6.f scrub starts
Nov 25 23:58:26 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:56.045628+0000 osd.2 (osd.2) 101 : cluster [DBG] 6.f scrub ok
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:27.960346+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:28.960531+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60121088 unmapped: 638976 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:29.960721+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60121088 unmapped: 638976 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:30.960906+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60129280 unmapped: 630784 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:31.961297+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60129280 unmapped: 630784 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:32.961614+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 622592 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:33.961885+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 622592 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:34.962083+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:35.962292+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:36.962449+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:37.962610+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:38.962845+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:39.963049+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:40.963253+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:41.963486+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 589824 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:42.963683+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 589824 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:43.963841+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:44.964029+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:45.964222+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:46.964339+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:47.964448+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:48.964594+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:49.964754+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60194816 unmapped: 565248 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:50.964910+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60194816 unmapped: 565248 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:51.965077+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60219392 unmapped: 540672 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:52.965321+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60227584 unmapped: 532480 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:53.965504+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:54.965686+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:55.965910+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:56.966128+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:57.966351+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:58.966508+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:59.966715+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:00.966869+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:01.967053+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:02.967305+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:03.967728+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60268544 unmapped: 491520 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:04.967959+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:05.971093+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:06.972912+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:07.973629+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60293120 unmapped: 466944 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:08.974900+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60293120 unmapped: 466944 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:09.975809+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60301312 unmapped: 458752 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:10.976194+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60301312 unmapped: 458752 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:11.976448+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60325888 unmapped: 434176 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:12.978646+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60334080 unmapped: 425984 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:13.980541+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60334080 unmapped: 425984 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:14.980837+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60342272 unmapped: 417792 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:15.981114+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60350464 unmapped: 409600 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:16.981334+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60350464 unmapped: 409600 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:17.981458+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60358656 unmapped: 401408 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:18.981965+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60358656 unmapped: 401408 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:19.982110+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60366848 unmapped: 393216 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:20.982349+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60366848 unmapped: 393216 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:21.982615+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:22.982789+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:23.983149+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:24.983295+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:25.983641+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:26.983875+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:27.984016+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:28.984134+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:29.984309+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:30.984473+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60399616 unmapped: 360448 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:31.984739+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60416000 unmapped: 344064 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:32.984917+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 335872 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:33.985255+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 335872 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:34.985491+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60432384 unmapped: 327680 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:35.985786+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60440576 unmapped: 319488 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:36.986093+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60440576 unmapped: 319488 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:37.986350+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:38.986627+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:39.986822+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:40.987019+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60473344 unmapped: 286720 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:41.987209+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60473344 unmapped: 286720 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:42.987369+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 278528 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:43.987520+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 278528 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:44.987660+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:45.987905+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:46.988108+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:47.988336+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60497920 unmapped: 262144 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:48.988456+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60497920 unmapped: 262144 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:49.988591+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60506112 unmapped: 253952 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:50.988747+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60506112 unmapped: 253952 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:51.988866+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:52.988989+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:53.989141+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:54.989254+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:55.989472+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:56.989597+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:57.989760+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:58.989934+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:59.990164+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:00.990379+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:01.990557+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:02.990751+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:03.990898+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:04.991100+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:05.991253+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:06.991395+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:07.991551+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:08.991722+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60571648 unmapped: 188416 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:09.991960+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60571648 unmapped: 188416 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:10.992102+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60588032 unmapped: 172032 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:11.992276+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60596224 unmapped: 163840 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:12.992446+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60596224 unmapped: 163840 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:13.992723+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60604416 unmapped: 155648 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:14.992922+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60604416 unmapped: 155648 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:15.993224+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60612608 unmapped: 147456 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:16.993434+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60620800 unmapped: 139264 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:17.993591+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60620800 unmapped: 139264 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:18.993762+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:19.993901+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:20.994075+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:21.994286+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 122880 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:22.994487+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 122880 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:23.994741+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 114688 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:24.994983+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 114688 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:25.995272+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 106496 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:26.995500+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:27.995732+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:28.995902+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:29.996108+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60669952 unmapped: 90112 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:30.996337+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60669952 unmapped: 90112 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:31.996492+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:32.996647+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:33.996826+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:34.997014+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:35.997235+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:36.997390+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:37.997568+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 65536 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:38.997759+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 65536 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:39.997920+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:40.998118+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:41.998272+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:42.998447+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 49152 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:43.998636+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 49152 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:44.998778+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:45.998944+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:46.999028+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:47.999211+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 32768 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:48.999381+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 32768 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:49.999583+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 24576 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:50.999766+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 24576 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:51.999897+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:53.000123+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:54.000259+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:55.000447+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:56.000619+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:57.000793+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:58.000927+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 0 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:59.001072+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 0 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:00.001234+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:01.001357+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:02.001526+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:03.001695+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:04.001864+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:05.002017+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:06.002262+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60784640 unmapped: 1024000 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:07.002424+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60784640 unmapped: 1024000 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:08.002534+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:09.002679+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:10.002857+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:11.003070+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60801024 unmapped: 1007616 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:12.003235+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60801024 unmapped: 1007616 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:13.003345+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60809216 unmapped: 999424 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:14.003444+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60809216 unmapped: 999424 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:15.003586+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:16.003766+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:17.003919+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:18.004078+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:19.004236+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:20.004406+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:21.004574+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60833792 unmapped: 974848 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:22.004751+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60833792 unmapped: 974848 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:23.004916+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:24.005079+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:25.005255+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:26.005464+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60850176 unmapped: 958464 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:27.005598+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60850176 unmapped: 958464 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:28.005764+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:29.005910+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:30.006067+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:31.006207+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60866560 unmapped: 942080 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:32.006329+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60866560 unmapped: 942080 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:33.006493+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:34.006642+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:35.006784+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:36.006994+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60882944 unmapped: 925696 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:37.007212+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60882944 unmapped: 925696 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:38.007382+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:39.007557+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:40.007755+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:41.007931+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:42.008082+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60899328 unmapped: 909312 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:43.008261+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60899328 unmapped: 909312 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:44.008410+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:45.008552+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:46.008755+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:47.008944+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:48.009121+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60915712 unmapped: 892928 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:49.009291+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60915712 unmapped: 892928 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:50.009565+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:51.009735+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:52.009898+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:53.010071+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60932096 unmapped: 876544 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:54.010248+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60932096 unmapped: 876544 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:55.010393+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60940288 unmapped: 868352 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:56.010573+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60948480 unmapped: 860160 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:57.010723+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60948480 unmapped: 860160 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:58.010906+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:59.011076+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:00.011237+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:01.011363+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:02.011521+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:03.011661+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:04.011829+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:05.012027+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:06.012251+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60973056 unmapped: 835584 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:07.012369+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60973056 unmapped: 835584 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:08.012500+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:09.012643+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:10.012783+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:11.012909+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:12.013078+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:13.013290+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:14.013475+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60997632 unmapped: 811008 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:15.013667+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60997632 unmapped: 811008 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:16.013833+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61005824 unmapped: 802816 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:17.014005+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61005824 unmapped: 802816 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:18.014129+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:19.014545+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:20.014722+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:21.014888+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:22.016060+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:23.016283+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:24.016416+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:25.016604+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61030400 unmapped: 778240 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:26.016911+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61030400 unmapped: 778240 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:27.017025+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:28.017278+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:29.017538+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:30.017687+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61046784 unmapped: 761856 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:31.017818+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61046784 unmapped: 761856 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:32.017941+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:33.018075+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:34.018229+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:35.018509+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:36.018833+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:37.019034+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:38.019322+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:39.019575+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:40.019774+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:41.020000+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61079552 unmapped: 729088 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:42.020216+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61079552 unmapped: 729088 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:43.020390+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:44.020585+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:45.020713+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:46.020857+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 712704 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:47.021008+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 712704 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:48.021122+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:49.021289+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:50.021454+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:51.021582+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:52.021935+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61112320 unmapped: 696320 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:53.022080+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61112320 unmapped: 696320 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:54.022598+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:55.022745+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:56.022909+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:57.023205+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:58.023334+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:59.023504+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:00.023781+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:01.024093+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:02.024372+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:03.024626+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:04.025006+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:05.025244+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:06.025495+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:07.025904+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:08.026085+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:09.026255+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:10.026449+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:11.026677+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:12.026950+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:13.027302+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:14.027504+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:15.027700+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:16.027873+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:17.028080+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:18.028225+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:19.028349+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:20.028506+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:21.028648+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:22.028799+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:23.028939+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:24.029366+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:25.030128+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:26.030449+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:27.030588+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:28.031001+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:29.031334+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:30.031619+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:31.031737+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:32.032460+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:33.032591+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:34.033012+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:35.033303+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:36.033489+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:37.033611+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:38.033770+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 557056 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:39.033940+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 557056 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:40.034325+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:41.034577+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:42.034727+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:43.034891+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 540672 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:44.035028+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 540672 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:45.035274+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:46.035529+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:47.035698+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:48.035851+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 524288 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:49.036047+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 524288 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:50.036186+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:51.036355+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:52.036505+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:53.036657+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61300736 unmapped: 507904 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:54.036893+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61300736 unmapped: 507904 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:55.037052+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:56.037204+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:57.037339+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:58.037478+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:59.037618+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61317120 unmapped: 491520 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:00.037739+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61317120 unmapped: 491520 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:01.037900+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:02.038047+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:03.038388+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:04.038527+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61333504 unmapped: 475136 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:05.038669+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61333504 unmapped: 475136 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:06.038848+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:07.039215+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:08.039383+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:09.039575+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:10.039797+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:11.040013+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61358080 unmapped: 450560 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:12.040244+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61366272 unmapped: 442368 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:13.040617+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61366272 unmapped: 442368 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:14.040982+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61374464 unmapped: 434176 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:15.041254+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61374464 unmapped: 434176 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:16.041535+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61382656 unmapped: 425984 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:17.041763+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 417792 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:18.041934+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 417792 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:19.042127+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61399040 unmapped: 409600 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:20.042245+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61399040 unmapped: 409600 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:21.042416+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:22.042633+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:23.042783+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:24.042939+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:25.043123+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:26.043437+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:27.043607+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:28.043728+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:29.043856+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:30.043961+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61423616 unmapped: 385024 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:31.044196+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 16.16 MB, 0.03 MB/s
                                           Interval WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61480960 unmapped: 327680 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:32.044355+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 319488 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:33.044484+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 319488 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:34.044683+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:35.044820+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:36.044971+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:37.045128+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 303104 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:38.045341+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 303104 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:39.045510+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:40.045639+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:41.045822+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:42.045974+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:43.046129+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:44.046307+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:45.046458+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:46.046668+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61530112 unmapped: 278528 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:47.046877+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61530112 unmapped: 278528 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:48.047889+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:49.048157+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:50.048385+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:51.048620+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:52.048946+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61546496 unmapped: 262144 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:53.049111+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61546496 unmapped: 262144 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:54.049236+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 253952 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:55.049414+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 253952 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:56.049609+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61562880 unmapped: 245760 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:57.049738+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61571072 unmapped: 237568 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:58.049948+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61571072 unmapped: 237568 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:59.050123+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:00.050233+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:01.050415+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:02.050569+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:03.050789+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61587456 unmapped: 221184 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:04.050974+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61587456 unmapped: 221184 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:05.051212+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 212992 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:06.051445+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 212992 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:07.051615+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:08.051801+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:09.052053+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:10.052271+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61612032 unmapped: 196608 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:11.072320+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61612032 unmapped: 196608 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:12.072485+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:13.072648+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:14.072796+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:15.072957+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 180224 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:16.073795+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 180224 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:17.073972+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:18.074139+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:19.074331+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:20.074488+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 163840 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:21.074724+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 155648 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:22.074880+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:23.075055+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:24.075260+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:25.075473+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:26.075671+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:27.076079+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:28.076229+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:29.076372+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:30.076509+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 131072 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:31.076722+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 131072 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:32.076907+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:33.077076+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:34.077238+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:35.077403+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:36.077567+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:37.077712+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:38.077838+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:39.077993+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:40.078157+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:41.078369+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 98304 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:42.078518+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 98304 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:43.078694+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:44.078833+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:45.079011+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:46.079262+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:47.079394+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:48.079541+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:49.079726+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:50.079892+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:51.080131+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:52.080392+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 73728 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:53.080572+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 73728 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:54.080803+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:55.081819+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:56.082281+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:57.083234+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 57344 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:58.083410+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 57344 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:59.083546+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 49152 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:00.083663+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 49152 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:01.083820+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:02.083953+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:03.084125+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:04.084636+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:05.084850+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:06.085096+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:07.085500+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 24576 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:08.085717+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 24576 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:09.085904+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:10.086238+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:11.086575+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:12.086813+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:13.086944+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:14.087101+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:15.087316+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:16.087545+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 0 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:17.087742+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 0 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:18.087895+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:19.088001+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:20.088220+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:21.088396+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 1032192 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:22.088629+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 1032192 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:23.088890+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61833216 unmapped: 1024000 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:24.089202+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61833216 unmapped: 1024000 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:25.089346+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:26.089554+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:27.089720+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:28.089931+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 1007616 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:29.090104+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 1007616 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:30.090253+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:31.090382+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:32.090521+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:33.090727+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:34.090859+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:35.090970+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:36.091092+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:37.091296+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:38.091470+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:39.091631+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:40.091779+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:41.091899+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:42.092034+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:43.092149+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:44.092215+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:45.092349+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:46.092493+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:47.092613+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:48.092735+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:49.112046+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:50.112202+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:51.121324+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:52.121477+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:53.121806+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:54.122001+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:55.122130+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:56.122367+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:57.122475+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:58.122584+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:59.122710+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:00.122919+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:01.123030+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:02.123131+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:03.123331+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:04.123462+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:05.123576+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:06.123779+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:07.124093+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:08.124215+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:09.124385+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:10.124525+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:11.124713+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:12.124958+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:13.125225+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:14.125427+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:15.125564+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:16.125750+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:17.125916+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:18.126094+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:19.126225+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:20.126455+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:21.126637+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:22.126813+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:23.126943+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:24.127127+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:25.127267+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:26.127464+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:27.127648+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:28.127832+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:29.128086+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:30.128316+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:31.128499+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:32.128652+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:33.128811+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:34.129011+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:35.129183+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:36.129339+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:37.129568+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:38.129716+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:39.129845+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:40.129955+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:41.130230+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:42.130399+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:43.130549+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:44.130695+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:45.130831+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:46.131021+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:47.131166+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:48.131335+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:49.131480+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:50.131684+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:51.131799+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:52.131942+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:53.132054+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:54.132227+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:55.132379+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:56.132523+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:57.132683+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:58.132840+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:59.132964+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:00.133154+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:01.133313+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:02.133485+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:03.133676+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:04.133815+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:05.133959+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:06.134234+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:07.134364+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:08.134490+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:09.134680+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:10.134885+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:11.135052+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:12.135294+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:13.135552+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:14.135767+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:15.136013+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: no keepalive since 2025-11-25T23:46:45.136110+0000 (2106-02-07T06:28:15.999867+0000 seconds), reconnecting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _reopen_session rank -1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _add_conns ranks=[0]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): picked mon.compute-0 con 0x56223f69c800 addr [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): start opening mon connection
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): get_auth_request con 0x56223f69c800 auth_method 0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): _init_auth method 2
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): _init_auth already have auth, reseting
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): handle_auth_reply_more payload 9
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): handle_auth_reply_more payload_len 9
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient(hunting): handle_auth_done global_id 14211 payload 293
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _finish_hunting 0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: found mon.compute-0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _finish_auth 0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:15.138354+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_monmap mon_map magic: 0 v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient:  got monmap 1 from mon.compute-0 (according to old e1)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: dump:
                                           epoch 1
                                           fsid 101922db-575f-58e2-980f-928050464f69
                                           last_changed 2025-11-25T23:31:04.907397+0000
                                           created 2025-11-25T23:31:04.907397+0000
                                           min_mon_release 18 (reef)
                                           election_strategy: 1
                                           0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_config config(9 keys) v1
Nov 25 23:58:26 compute-0 ceph-osd[91111]: set_mon_vals no callback set
Nov 25 23:58:26 compute-0 ceph-osd[91111]: mgrc handle_mgr_map Got map version 9
Nov 25 23:58:26 compute-0 ceph-osd[91111]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:19.637788+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:20.637959+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:21.638089+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:22.638469+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:23.638665+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:24.672619+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:25.672789+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:26.672935+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:27.673161+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:28.673358+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:29.673510+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:30.673635+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:31.673789+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:32.673984+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:33.674143+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:34.674296+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:35.674428+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:36.674658+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:37.674873+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:38.675005+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:39.675193+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:40.675325+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:41.675524+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:42.675685+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:43.675828+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:44.675996+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:45.676374+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:46.676547+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:47.676671+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:48.676946+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:49.677085+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:50.677229+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:51.677359+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:52.677814+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:53.678043+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:54.678235+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:55.678415+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:56.678616+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:57.678784+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:58.678913+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:59.679047+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:00.679176+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:01.679293+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:02.679420+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:03.679701+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:04.679885+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:05.680084+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:06.680300+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:07.680466+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:08.680667+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:09.680873+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:10.681006+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:11.681214+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:12.681417+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:13.681606+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:14.681801+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:15.682018+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:16.682263+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:17.682429+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:18.682645+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:19.682805+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:20.682940+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:21.683049+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:22.683160+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:23.683340+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:24.683503+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:25.683663+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:26.683828+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:27.683976+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:28.684120+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:29.684275+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:30.684449+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:31.684610+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:32.684773+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:33.684882+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:34.685038+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:35.685144+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: mgrc ms_handle_reset ms_handle_reset con 0x56223dd09c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 25 23:58:26 compute-0 ceph-osd[91111]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: get_auth_request con 0x56223f255800 auth_method 0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: mgrc handle_mgr_configure stats_period=5
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:36.685358+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:37.685519+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:38.685672+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:39.685836+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:40.686013+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:41.686154+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:42.686387+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:43.686575+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:44.686730+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:45.686866+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:46.687081+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:47.687327+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:48.687529+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:49.687672+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:50.687793+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:51.687962+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:52.688123+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:53.688244+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:54.688390+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:55.688558+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:56.688734+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:57.688860+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:58.689008+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:59.689193+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:00.689336+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:01.689465+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:02.689625+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:03.689790+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:04.689922+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:05.690045+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:06.690262+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:07.690409+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:08.690544+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:09.690695+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:10.690846+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:11.690999+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:12.691137+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:13.691334+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:14.691470+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:15.691600+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:16.692106+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:17.692310+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:18.692434+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:19.692609+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:20.692790+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:21.692926+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:22.693076+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:23.693447+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:24.693735+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:25.693993+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:26.694164+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:27.708493+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:28.708733+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:29.709009+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:30.709193+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:31.709300+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:32.709421+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:33.709590+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:34.709764+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:35.709896+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:36.710041+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:37.710231+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:38.710470+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:39.710618+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:40.710780+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:41.710928+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:42.711085+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:43.711249+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:44.711436+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:45.711662+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:46.711818+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:47.712023+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:48.712209+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:49.712369+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:50.712550+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:51.712709+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:52.712842+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:53.712964+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:54.713222+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:55.713348+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:56.713504+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:57.713643+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:58.714528+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:59.714698+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:00.714851+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:01.715232+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:02.715389+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:03.715831+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:04.715970+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:05.716367+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:06.716582+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:07.716811+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:08.717273+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:09.717665+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:10.718521+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:11.718732+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:12.718927+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:13.719109+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:14.719262+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:15.719392+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:16.719562+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:17.719701+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:18.719842+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:19.719957+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:20.720067+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:21.720257+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:22.720404+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:23.720610+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:24.720756+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:25.720896+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:26.721105+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:27.721249+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:28.721403+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:29.721526+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:30.721748+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:31.721940+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:32.722211+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:33.722414+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:34.722550+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:35.722680+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:36.722828+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:37.723006+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:38.723269+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:39.723395+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:40.723561+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:41.723718+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:42.723925+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:43.724070+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:44.724249+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:45.724405+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:46.724577+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:47.724767+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:48.724926+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:49.725054+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:50.725210+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:51.725372+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:52.725504+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:53.725641+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:54.725780+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:55.725995+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:56.726214+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:57.726351+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:58.726507+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:59.726836+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:00.727058+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:01.727207+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:02.727333+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:03.727474+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:04.727586+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:05.728161+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:06.728480+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:07.728620+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:08.728734+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:09.728847+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:10.728970+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:11.729081+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:12.729217+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:13.729304+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:14.729479+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:15.729663+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:16.729934+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:17.730120+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:18.730228+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:19.730326+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:20.730455+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:21.730590+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:22.730690+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:23.730845+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:24.731008+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:25.731199+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:26.731347+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:27.731466+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:28.731603+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:29.731744+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:30.731817+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:31.731960+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:32.732148+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:33.732317+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:34.732444+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:35.732574+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:36.732727+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:37.732854+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:38.733048+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:39.733224+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:40.733457+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:41.733598+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:42.733821+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:43.733996+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:44.734221+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:45.734495+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:46.734746+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:47.734917+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:48.735040+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:49.735237+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:50.735463+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:51.735675+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:52.735817+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:53.735964+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:54.736124+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:55.736370+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:56.736596+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:57.736765+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:58.736945+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:59.737241+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:00.737481+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:01.737648+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:02.737788+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:03.738792+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:04.739594+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:05.740672+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:06.741461+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:07.742079+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:08.742545+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:09.742861+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:10.743111+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:11.743323+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:12.743474+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:13.744231+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:14.744416+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:15.745106+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:16.745756+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:17.746333+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:18.746740+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:19.747280+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:20.747667+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:21.748060+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:22.748241+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:23.748529+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:24.748698+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:25.749064+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:26.749482+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:27.749731+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:28.749988+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:29.750239+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:30.750402+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:31.750515+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:32.750736+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:33.751004+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:34.751143+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:35.751352+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:36.751539+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:37.751704+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:38.751821+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:39.751981+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:40.752099+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:41.752266+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:42.752389+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:43.752546+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:44.752722+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:45.752903+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:46.753049+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:47.753232+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:48.753360+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:49.753545+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:50.753680+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:51.753877+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:52.754031+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:53.754232+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:54.754391+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:55.754556+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:56.754758+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:57.754927+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:58.755061+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:59.755276+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:00.755450+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:01.755599+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:02.755727+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:03.755866+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:04.756125+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:05.756279+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:06.756452+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:07.756708+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:08.756821+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:09.757073+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:10.757241+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:11.757606+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:12.757924+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:13.758111+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:14.758266+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:15.758437+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:16.758629+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:17.758820+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:18.758994+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:19.759318+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:20.759535+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:21.759748+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:22.760046+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:23.760278+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:24.760507+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:25.760700+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:26.760958+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:27.761126+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:28.761325+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:29.761556+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:30.761741+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:31.761968+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:32.762226+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:33.762465+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:34.762631+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:35.762862+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:36.763151+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:37.763491+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:38.763697+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:39.763872+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:40.764029+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:41.764243+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:42.764427+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:43.764600+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:44.764751+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:45.764962+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:46.765991+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:47.766289+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:48.766431+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:49.766572+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:50.766725+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:51.766913+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:52.767132+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:53.767348+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:54.767550+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:55.767691+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:56.767899+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:57.768338+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:58.768564+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:59.768741+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:00.768928+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:01.769145+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:02.769326+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:03.769523+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:04.769719+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:05.769895+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:06.770125+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:07.770315+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:08.770491+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:09.770650+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:10.770805+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:11.771048+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:12.771244+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:13.771422+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:14.771667+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:15.771899+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:16.772089+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:17.772290+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:18.772538+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:19.772799+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:20.773009+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:21.773214+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:22.773398+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:23.773552+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:24.773720+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:25.774069+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:26.774307+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:27.774509+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:28.774702+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:29.774962+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:30.775222+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:31.775378+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:32.775562+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:33.775727+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:34.775856+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:35.776037+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:36.776248+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:37.776420+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:38.776578+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:39.776727+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:40.776885+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:41.777049+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:42.777237+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:43.777399+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:44.777561+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:45.777771+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:46.777995+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:47.778152+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:48.778318+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:49.778457+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:50.778638+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:51.778730+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:52.778862+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:53.778948+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:54.779069+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:55.779231+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:56.779392+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:57.779540+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:58.779659+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:59.779831+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:00.779953+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:01.780080+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:02.780248+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:03.780421+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:04.780566+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:05.780749+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:06.780934+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:07.781127+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:08.781330+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:09.781508+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:10.781642+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223ea28c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:11.781779+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:12.781949+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 63 handle_osd_map epochs [64,65], i have 63, src has [1,65]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1071.016235352s of 1071.030395508s, submitted: 4
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 65 heartbeat osd_stat(store_statfs(0x4fe117000/0x0/0x4ffc00000, data 0x4afe6/0xb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:13.782091+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 17514496 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:14.782281+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 66 ms_handle_reset con 0x56223ea28c00 session 0x56223f2c0b40
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 17457152 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:15.782526+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 502193 data_alloc: 218103808 data_used: 114688
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 17448960 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:16.782768+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 17448960 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:17.783931+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:18.784812+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 67 ms_handle_reset con 0x56223e823c00 session 0x56223eae34a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd4a3000/0x0/0x4ffc00000, data 0xcbc602/0xd2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:19.785096+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:20.785838+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 539786 data_alloc: 218103808 data_used: 122880
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:21.786462+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:22.786709+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:23.787140+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd49e000/0x0/0x4ffc00000, data 0xcbdbfb/0xd2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.124011040s of 11.322376251s, submitted: 43
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:24.787397+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:25.787549+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:26.788002+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:27.788262+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:28.788624+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:29.789005+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:30.789374+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:31.789745+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:32.790074+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:33.790272+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:34.790609+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:35.790765+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:36.791053+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:37.791310+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:38.791527+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:39.791791+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:40.791951+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:41.792288+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:42.792549+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:43.792781+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:44.793050+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:45.793228+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:46.793482+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:47.793684+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:48.794003+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:49.794131+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:50.794371+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:51.796323+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:52.798118+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.678512573s of 28.689655304s, submitted: 13
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 69 ms_handle_reset con 0x56223e822800 session 0x56223eae2d20
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:53.798515+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 17227776 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:54.798820+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 17227776 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fd496000/0x0/0x4ffc00000, data 0xcc0a78/0xd37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:55.798998+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 17219584 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 604393 data_alloc: 218103808 data_used: 131072
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:56.799205+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 25485312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:57.799417+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fbc96000/0x0/0x4ffc00000, data 0x24c0a88/0x2538000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 25436160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 69 ms_handle_reset con 0x56223e822c00 session 0x56223f2205a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:58.799535+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 25436160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:59.799708+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 25419776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:00.799851+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 71 ms_handle_reset con 0x56223e823000 session 0x56223f221a40
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 24428544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 564280 data_alloc: 218103808 data_used: 139264
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:01.799981+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 24387584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc3640/0xd3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc3640/0xd3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:02.800114+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 72 ms_handle_reset con 0x56223e822800 session 0x56223f707e00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 24199168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:03.800273+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 24199168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.976924896s of 11.311155319s, submitted: 70
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 73 ms_handle_reset con 0x56223e822c00 session 0x56223f5f1e00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:04.800472+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 24100864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 73 ms_handle_reset con 0x56223e823c00 session 0x56223f5f14a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:05.800632+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 23977984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 574353 data_alloc: 218103808 data_used: 139264
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223ea28c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:06.800797+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 23920640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223ea28c00 session 0x56223f5f0960
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e823400 session 0x56223f742f00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:07.801073+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 23904256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e823400 session 0x56223f5bf4a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e822800 session 0x56223f5be960
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fd481000/0x0/0x4ffc00000, data 0xcc8cfc/0xd4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:08.801265+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 23904256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 76 ms_handle_reset con 0x56223e822c00 session 0x56223f707a40
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:09.801450+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 76 ms_handle_reset con 0x56223e823c00 session 0x56223f2c0780
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 23764992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223ea28c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223f254c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fd480000/0x0/0x4ffc00000, data 0xcca2d4/0xd4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [0,0,1,1])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:10.801568+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 77 ms_handle_reset con 0x56223f254c00 session 0x56223f59f860
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 23625728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 77 ms_handle_reset con 0x56223ea28c00 session 0x56223f2e7860
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 588235 data_alloc: 218103808 data_used: 147456
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:11.801808+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 23592960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:12.801962+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 23592960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:13.802232+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 23584768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.005178452s of 10.492918968s, submitted: 152
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:14.802378+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822c00 session 0x56223f2205a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822800 session 0x56223ea89860
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822400 session 0x56223ea89680
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fd479000/0x0/0x4ffc00000, data 0xcce187/0xd54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:15.802577+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 594951 data_alloc: 218103808 data_used: 147456
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:16.802791+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:17.802992+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 23461888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:18.803147+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 23461888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:19.803406+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 23453696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fd476000/0x0/0x4ffc00000, data 0xccf683/0xd57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 80 ms_handle_reset con 0x56223e823c00 session 0x56223ea88f00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:20.803647+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 23453696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 597747 data_alloc: 218103808 data_used: 147456
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:21.803790+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 82 ms_handle_reset con 0x56223e822400 session 0x56223f5f0960
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 23298048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:22.803953+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 83 ms_handle_reset con 0x56223e823c00 session 0x56223eae3860
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 23281664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:23.804100+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 84 ms_handle_reset con 0x56223e822c00 session 0x56223eae2d20
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 22175744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.822913170s of 10.012957573s, submitted: 69
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:24.804225+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 85 ms_handle_reset con 0x56223e822800 session 0x56223eae2b40
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 22110208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223ea28c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:25.804338+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 86 ms_handle_reset con 0x56223ea28c00 session 0x56223f2c1a40
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 86 heartbeat osd_stat(store_statfs(0x4fd45f000/0x0/0x4ffc00000, data 0xcd6c4b/0xd6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 634272 data_alloc: 218103808 data_used: 200704
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:26.804524+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:27.804682+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:28.804809+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:29.804971+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fd459000/0x0/0x4ffc00000, data 0xcd996d/0xd74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 22036480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 87 ms_handle_reset con 0x56223e822800 session 0x56223f2210e0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:30.805159+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 88 ms_handle_reset con 0x56223e822400 session 0x56223f2c10e0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 20619264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 88 ms_handle_reset con 0x56223e822c00 session 0x56223f220000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 641350 data_alloc: 218103808 data_used: 200704
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e86dc00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223f255000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:31.805372+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 20455424 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 89 ms_handle_reset con 0x56223e86dc00 session 0x56223f220f00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:32.805569+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fbe7f000/0x0/0x4ffc00000, data 0xd01d2a/0xd9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 20348928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 90 ms_handle_reset con 0x562240975000 session 0x56223f707c20
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:33.805765+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 20234240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 91 ms_handle_reset con 0x562240975000 session 0x56223f5bef00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:34.805944+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.528375626s of 10.122215271s, submitted: 156
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822c00 session 0x56223ea01680
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822400 session 0x56223f11bc20
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 19128320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822800 session 0x56223e9d41e0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:35.806125+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbe79000/0x0/0x4ffc00000, data 0xd04238/0xda0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e86dc00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 18169856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e86dc00 session 0x56223ea010e0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 652558 data_alloc: 218103808 data_used: 217088
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e822400 session 0x56223ea005a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:36.806342+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e822800 session 0x56223f2e8960
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 18112512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:37.806523+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 18079744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fbe76000/0x0/0x4ffc00000, data 0xd06d56/0xda6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:38.806686+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 18022400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 95 ms_handle_reset con 0x56223e822c00 session 0x56223f2e92c0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:39.806827+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975000 session 0x56223f2e8b40
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:40.807037+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 17973248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664123 data_alloc: 218103808 data_used: 229376
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fbe70000/0x0/0x4ffc00000, data 0xd099ba/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:41.807189+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:42.807401+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:43.807617+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975400 session 0x56223e9d4000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822400 session 0x56223ea00000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975400 session 0x56223f707e00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822800 session 0x56223f2e74a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975000 session 0x56223dc9da40
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975c00 session 0x56223f220780
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975c00 session 0x56223f743e00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:44.807795+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822400 session 0x56223ea005a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 17948672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fbe70000/0x0/0x4ffc00000, data 0xd099ba/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.487519264s of 10.742533684s, submitted: 87
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:45.807962+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x56223e822800 session 0x56223ea010e0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975000 session 0x56223f2e8b40
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975400 session 0x56223f2e92c0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668581 data_alloc: 218103808 data_used: 237568
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:46.808103+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:47.808467+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:48.808574+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:49.808717+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975000 session 0x56223f2e8000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:50.808847+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0xd0aea2/0xdb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 17989632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668581 data_alloc: 218103808 data_used: 237568
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:51.809012+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x562240975c00 session 0x56223f2c03c0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066d000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240654c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x562240654c00 session 0x56223f2f8000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x56224066d000 session 0x56223f706d20
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x56223e822c00 session 0x56223f743a40
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240654c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 17915904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fbe6a000/0x0/0x4ffc00000, data 0xd0c45c/0xdb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 98 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:52.809148+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 99 ms_handle_reset con 0x562240654c00 session 0x56223f5be960
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:53.809330+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:54.809473+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0xd0de6a/0xdb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066cc00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 100 ms_handle_reset con 0x56224066cc00 session 0x56223f5f0f00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:55.809614+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066c800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679111 data_alloc: 218103808 data_used: 241664
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.938570976s of 11.146072388s, submitted: 55
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:56.809907+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56224066c800 session 0x56223f220780
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:57.810082+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56223e822400 session 0x56223f59e5a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56223e822800 session 0x56223ea01c20
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:58.810230+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fbe61000/0x0/0x4ffc00000, data 0xd1066a/0xdbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 17825792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:59.810397+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 102 ms_handle_reset con 0x56223e822400 session 0x56223f2205a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:00.810590+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683786 data_alloc: 218103808 data_used: 245760
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:01.810743+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:02.810871+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:03.811102+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fbe5f000/0x0/0x4ffc00000, data 0xd11c84/0xdbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:04.811274+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x56223e823c00 session 0x56223f5f14a0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x56223f255000 session 0x56223f2f9680
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240654c00
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 17768448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x562240654c00 session 0x56223ea01a40
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fbe5c000/0x0/0x4ffc00000, data 0xd13140/0xdc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:05.811429+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 17768448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683002 data_alloc: 218103808 data_used: 241664
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066c800
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 104 ms_handle_reset con 0x56224066c800 session 0x56223eae2780
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:06.811576+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066d000
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.860716820s of 10.143070221s, submitted: 96
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 17752064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:07.811694+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 105 ms_handle_reset con 0x56224066d000 session 0x56223f2e9c20
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:08.811832+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:09.812310+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:10.813642+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689942 data_alloc: 218103808 data_used: 245760
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:11.813757+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:12.813883+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:13.814027+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:14.814277+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:15.814382+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689942 data_alloc: 218103808 data_used: 245760
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:16.814577+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:17.814746+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 17727488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:18.814914+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.980017662s of 12.055953979s, submitted: 14
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:19.815046+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:20.815244+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 692244 data_alloc: 218103808 data_used: 245760
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:21.815366+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:22.815540+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:23.815680+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:24.815878+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:25.816374+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:26.816765+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:27.817271+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:28.817623+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:29.817960+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:30.818205+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:31.818381+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:32.818542+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:33.818800+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:34.819005+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:35.819255+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:36.819586+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:37.819699+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:38.819853+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:39.820261+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:40.820526+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:26 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:26 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:41.820737+0000)
Nov 25 23:58:26 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:26 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:42.820880+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:43.821041+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:44.821311+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:45.821531+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:46.821763+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:47.821947+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:48.822114+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:49.822273+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:50.822423+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:51.822583+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:52.822765+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:53.822959+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:54.823163+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:55.823342+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:56.823510+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:57.823642+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:58.823764+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:59.823856+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:00.823968+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:01.824098+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:02.824223+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:03.824381+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:04.824511+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:05.824662+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:06.824840+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:07.824980+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:08.825138+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:09.825277+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:10.825418+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:11.825584+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:12.825716+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:13.825842+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:14.826015+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:15.826199+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:16.826381+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:17.826509+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:18.826769+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:19.826911+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:20.827039+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-mon[75654]: from='client.14752 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-mon[75654]: from='client.14754 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:21.827163+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2611995556' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-mon[75654]: from='client.14758 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:22.827357+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3045823816' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1208784011' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:23.827537+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:24.827697+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:25.827844+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:26.828023+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:27.828223+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:28.828418+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:29.828607+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:30.829017+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:31.829278+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:32.829842+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:33.830306+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:34.830706+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:35.832300+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:36.834231+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:37.834472+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:38.834673+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:39.834806+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:40.834942+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:41.835247+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:42.835373+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:43.835501+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:44.835617+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:45.835760+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:46.835930+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:47.836064+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:48.836241+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:49.836415+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:50.905015+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:51.905159+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:27 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:27 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:52.905326+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 17489920 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:53.905442+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}'
Nov 25 23:58:27 compute-0 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 23:58:27 compute-0 ceph-osd[91111]: do_command 'config show' '{prefix=config show}'
Nov 25 23:58:27 compute-0 ceph-osd[91111]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 23:58:27 compute-0 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 23:58:27 compute-0 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 23:58:27 compute-0 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 23:58:27 compute-0 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 17170432 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:54.905560+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 16916480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: tick
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 25 23:58:27 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:55.905716+0000)
Nov 25 23:58:27 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 25 23:58:27 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 17014784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:27 compute-0 ceph-osd[91111]: do_command 'log dump' '{prefix=log dump}'
Nov 25 23:58:27 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 23:58:27 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14768 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 25 23:58:27 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140068122' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 23:58:27 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14772 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:27 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 25 23:58:27 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109943533' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 23:58:27 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14776 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:28 compute-0 ceph-mon[75654]: from='client.14762 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:28 compute-0 ceph-mon[75654]: pgmap v892: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:28 compute-0 ceph-mon[75654]: from='client.14766 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:28 compute-0 ceph-mon[75654]: from='client.14768 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:28 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/140068122' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 23:58:28 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2109943533' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 23:58:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 25 23:58:28 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/51285997' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 23:58:28 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14780 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 25 23:58:28 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3987879754' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 23:58:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v893: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:28 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14784 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:28 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 25 23:58:28 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2177530395' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 23:58:29 compute-0 ceph-mon[75654]: from='client.14772 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:29 compute-0 ceph-mon[75654]: from='client.14776 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:29 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/51285997' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 23:58:29 compute-0 ceph-mon[75654]: from='client.14780 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:29 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3987879754' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 23:58:29 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2177530395' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 23:58:29 compute-0 podman[262588]: 2025-11-25 23:58:29.259780266 +0000 UTC m=+0.070323373 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 23:58:29 compute-0 podman[262577]: 2025-11-25 23:58:29.302196039 +0000 UTC m=+0.121204946 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 23:58:29 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14792 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:29 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-25T23:58:29.459+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 23:58:29 compute-0 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 25 23:58:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 25 23:58:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190590891' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 23:58:29 compute-0 crontab[262758]: (root) LIST (root)
Nov 25 23:58:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 25 23:58:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620990861' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 23:58:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 25 23:58:30 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583569641' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 23:58:30 compute-0 ceph-mon[75654]: pgmap v893: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:30 compute-0 ceph-mon[75654]: from='client.14784 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:30 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2190590891' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 23:58:30 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/620990861' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 23:58:30 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3583569641' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 23:58:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 25 23:58:30 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521465429' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 23:58:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 25 23:58:30 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3747309463' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 23:58:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v894: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 25 23:58:30 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028224869' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 23:58:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 25 23:58:30 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/434467421' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013628 2 0.000043
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000042
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000179 1 0.000067
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000028 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000014
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000030
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013164 2 0.000060
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012811 2 0.000109
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000034 1 0.000039
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000039
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012391 2 0.000105
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 8.563383 17 0.000123
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 8.568710 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary 8.568837 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] exit Started 8.568867 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000015 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000010
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436471939s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 87.733673096s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] exit Reset 0.000085 1 0.000132
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] exit Start 0.000013 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=15.436425209s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.733673096s@ mbc={}] enter Started/Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000063 1 0.000042
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000034 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000009
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000032
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000271 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000023 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000045
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013111 2 0.000058
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000020 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000033
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000025 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000049 1 0.000041
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000004 1 0.000007
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000022
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.013066 2 0.000070
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000013 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000005 1 0.000009
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000032
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000050 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000043
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000205 1 0.000115
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000095 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000035
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000136 1 0.000088
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013970 2 0.000062
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000060 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000025
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000077
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000033 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000011
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000082 1 0.000037
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000056 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000033
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000019 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000101 1 0.000073
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000025
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000020 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000125 1 0.000073
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.019024 2 0.000027
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.015595 2 0.000057
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000026
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000023 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000117 1 0.000071
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000027
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000128 1 0.000075
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000024 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000019
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000085 1 0.000034
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000014 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000027
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000036
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000022 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000013
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000277 1 0.000031
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016128 2 0.000026
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024992 2 0.000067
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024081 2 0.000090
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.023676 2 0.000071
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023415 2 0.000059
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021783 2 0.000106
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021299 2 0.000091
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021006 2 0.000047
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017882 2 0.000079
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017303 2 0.000070
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017214 2 0.000029
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016946 2 0.000032
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016710 2 0.000025
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017635 2 0.000026
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017222 2 0.000027
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016885 2 0.000016
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016695 2 0.000019
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016281 2 0.000082
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015809 2 0.000074
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017147 2 0.000027
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017413 2 0.000026
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014931 2 0.000203
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013974 2 0.000070
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012868 2 0.000024
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015466 2 0.000056
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013478 2 0.000055
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012409 2 0.000039
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013219 2 0.000057
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014784 2 0.000074
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013147 2 0.000021
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:38.832579+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:08.563024+0000 osd.1 (osd.1) 8 : cluster [DBG] 3.4 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:08.577138+0000 osd.1 (osd.1) 9 : cluster [DBG] 3.4 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 42 heartbeat osd_stat(store_statfs(0x4fe0ef000/0x0/0x4ffc00000, data 0x9c035/0xde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58130432 unmapped: 1499136 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 9) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:08.563024+0000 osd.1 (osd.1) 8 : cluster [DBG] 3.4 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:08.577138+0000 osd.1 (osd.1) 9 : cluster [DBG] 3.4 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 42 handle_osd_map epochs [42,43], i have 42, src has [1,43]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 42 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 42 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972849 2 0.000036
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986264 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972598 2 0.000396
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972971 2 0.000037
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985850 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985702 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972895 2 0.000534
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987835 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973322 2 0.000032
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976882 2 0.000027
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987483 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993115 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973919 2 0.000063
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989957 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973761 2 0.000052
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989415 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973931 2 0.000026
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986911 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974025 2 0.000031
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987671 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974229 2 0.000055
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989302 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974408 2 0.000024
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991632 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974755 2 0.000034
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992063 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974473 2 0.000278
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991954 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976000 2 0.000041
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992810 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976053 2 0.000067
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993105 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974937 2 0.000060
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992674 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976294 2 0.000028
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993821 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976220 2 0.000065
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993549 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976680 2 0.000060
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994671 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975048 2 0.000035
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991589 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975132 2 0.000066
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992097 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978129 2 0.000053
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999333 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978714 2 0.000075
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978744 2 0.000043
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000933 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000297 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.978948 2 0.000032
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002506 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975931 2 0.000037
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992705 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979224 2 0.000032
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979170 2 0.000031
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.003926 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.003024 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.979368 2 0.000089
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004590 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991849 2 0.000093
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006026 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989807 2 0.001072
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.005629 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993223 2 0.000033
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.006534 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993654 2 0.000082
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006915 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994667 2 0.000040
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007285 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995044 2 0.000026
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008082 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.995218 2 0.000028
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008553 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996097 2 0.000039
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.008951 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996438 2 0.000088
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009360 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.990473 2 0.000089
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.009673 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996277 2 0.000069
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.010081 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996524 2 0.000448
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.011004 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010266 4 0.000125
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010342 4 0.000192
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010299 4 0.000146
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010194 4 0.000063
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014016 7 0.000056
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000348 1 0.000055
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019871 4 0.000080
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019334 4 0.000093
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020283 4 0.000108
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019848 4 0.000056
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020307 4 0.000129
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020034 4 0.000059
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019904 4 0.000071
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019853 4 0.000044
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019888 4 0.000174
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019765 4 0.000040
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019732 4 0.000038
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019723 4 0.000038
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019691 4 0.000039
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019688 4 0.000216
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019670 4 0.000039
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019493 4 0.000055
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019424 4 0.000050
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018920 4 0.000059
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018916 4 0.000161
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019481 4 0.000083
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018769 4 0.000058
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018723 4 0.000100
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018533 4 0.000087
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018412 4 0.000083
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018709 4 0.000166
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018389 4 0.000058
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.018663 5 0.000273
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018338 4 0.000058
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018242 4 0.000085
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020152 4 0.000106
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.018425 5 0.000168
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018196 4 0.000044
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000197 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017913 4 0.000059
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.018158 5 0.000079
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018035 4 0.000104
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/37 les/c/f=43/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.018024 5 0.000162
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.017848 4 0.000233
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.019038 5 0.000429
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000610 1 0.000589
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020544 7 0.000107
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019114 7 0.000114
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018970 7 0.000121
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018830 7 0.000075
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018847 7 0.000053
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018575 7 0.000094
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017748 7 0.000122
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019474 7 0.000096
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019117 7 0.000124
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018433 7 0.000099
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017670 7 0.000156
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017689 7 0.000102
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018111 7 0.000434
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017214 7 0.000058
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017371 7 0.000054
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017131 7 0.000119
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017539 7 0.000082
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018023 7 0.000255
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019288 7 0.000121
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014176 1 0.000071
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.014579 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.18( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.028636 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023852 7 0.000119
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029974 7 0.001545
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026382 7 0.000118
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026767 7 0.000141
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.026953 7 0.000069
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029906 7 0.000132
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027607 7 0.000077
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028332 7 0.000182
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028700 7 0.000105
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027369 7 0.000130
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027202 7 0.000078
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029064 7 0.000121
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028138 7 0.000331
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028823 7 0.000091
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029416 7 0.000192
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029241 7 0.000472
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027491 7 0.000106
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030989 7 0.000041
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027538 7 0.000115
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.028741 7 0.000141
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029786 7 0.000457
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.015368 1 0.000067
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.015984 1 0.000535
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.878012657s of 10.115623474s, submitted: 499
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.292794 1 0.000079
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.308847 1 0.000497
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000018 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.144758 1 0.000109
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.453646 1 0.000449
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000018 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.141584 1 0.000100
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.594948 1 0.000047
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.667516 2 0.000426
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.072085 1 0.000116
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000036 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:39.832780+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.215949 1 0.000126
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881819 1 0.000027
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000026 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/39 les/c/f=43/41/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881980 1 0.000012
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881790 1 0.000021
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881795 1 0.000054
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881900 1 0.000061
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.881956 1 0.000020
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882008 1 0.000013
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882002 1 0.000077
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882069 1 0.000013
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882111 1 0.000013
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882168 1 0.000012
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882233 1 0.000014
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882422 1 0.000122
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882442 1 0.000014
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882509 1 0.000015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882751 1 0.000031
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882845 1 0.000015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.883069 1 0.000027
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.882535 1 0.000882
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874821 1 0.000035
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874249 1 0.000032
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874300 1 0.000025
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874340 1 0.000025
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874411 1 0.000030
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874488 1 0.000044
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874053 1 0.000036
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874157 1 0.000024
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874217 1 0.000019
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874288 1 0.000019
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874419 1 0.000020
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874459 1 0.000028
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874535 1 0.000017
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874636 1 0.000016
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874834 1 0.000020
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.874937 1 0.000017
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.875027 1 0.000017
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.875211 1 0.000029
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.875292 1 0.000019
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.875350 1 0.000017
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.875400 1 0.000017
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.010538 1 0.000122
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.892456 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.913034 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1c] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014778 1 0.000033
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.896792 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.16( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.915965 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.16] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58540032 unmapped: 1089536 heap: 59629568 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022306 1 0.000030
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.904140 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.923161 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029560 1 0.000025
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.911424 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.15( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.930306 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.15] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036949 1 0.000035
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.918955 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.5( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.937559 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.5] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044335 1 0.000118
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.926285 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.8( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.945159 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.8] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051643 1 0.000025
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.933687 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.2( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.951509 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.2] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058711 1 0.000033
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.940814 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.8( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.958586 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066151 1 0.000090
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.948188 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.5( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.966746 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.5] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.073471 1 0.000023
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.955603 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.c( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.973370 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.c] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.080780 1 0.000075
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.962982 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.1( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.981444 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088138 1 0.000066
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.970424 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1e( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.987666 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095405 1 0.000112
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.977862 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.11( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.997061 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102733 1 0.000054
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.985210 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[3.1d( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.002605 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 43 handle_osd_map epochs [44,44], i have 43, src has [1,44]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.113388 1 0.000021
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.995963 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 43 pg[7.e( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.013593 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000123 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000050 1 0.000089
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000592 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000276 1 0.000890
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000030
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000154 1 0.000169
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000085 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000027
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000019 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000101 1 0.000074
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=0 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000046
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000155 1 0.000083
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002173 2 0.000166
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000034 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.e] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002901 2 0.000064
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002560 2 0.000045
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002105 2 0.000071
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.125351 4 0.000106
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.125247 4 0.000194
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.008153 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.008167 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.7( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [2] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.026228 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.025368 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1a] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.131524 4 0.000047
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.014630 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.a( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.033983 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.a] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.138779 4 0.000039
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.021358 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.11( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [2] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.041735 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.11] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.146142 4 0.000157
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.021055 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.044960 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.153617 4 0.000052
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.027914 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.15( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.057928 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.160890 4 0.000025
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.035260 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1b( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.061731 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1b] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.168064 4 0.000043
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.042522 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.13( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.072494 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.13] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.175533 4 0.000095
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.049949 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.a( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.076932 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.a] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.182743 4 0.000026
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.057266 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.9( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.084095 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.190368 4 0.000101
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.064493 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.3( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.092172 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.3] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.197436 4 0.000056
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.071647 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.3( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.100144 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.3] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.204688 4 0.000029
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.078945 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.107719 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.f] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.211860 4 0.000037
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086324 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.18( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.113587 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.18] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.219797 4 0.000171
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.094143 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.6( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.121573 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.6] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.c( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.226741 4 0.000032
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.c( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.101234 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.c( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.130324 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.c] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.234042 4 0.000107
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.108629 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.6( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.137036 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.241354 4 0.000077
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.116059 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.f( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.145522 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.f] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.248572 4 0.000051
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.123471 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.4( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.152340 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.4] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.255776 4 0.000102
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.130764 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.9( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.160035 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.9] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.263202 4 0.000060
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.138323 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1b( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.165859 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.270339 4 0.000090
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.145687 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.17( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.176729 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.277439 4 0.000046
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.152780 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[7.1f( empty lb MIN local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.180392 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[7.1f] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.284997 4 0.000036
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.160394 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.1( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.189222 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.12( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 DELETING pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.292135 4 0.000023
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.12( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.167576 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 44 pg[3.12( empty lb MIN local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 2.197395 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:40.832945+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58531840 unmapped: 2146304 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 362176 data_alloc: 218103808 data_used: 0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 44 handle_osd_map epochs [44,45], i have 44, src has [1,45]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 44 handle_osd_map epochs [44,45], i have 45, src has [1,45]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997991 2 0.000138
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000348 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998089 2 0.000126
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.000929 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998399 2 0.000067
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.001585 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999866 2 0.000238
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.002524 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004684 3 0.000171
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000027 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 45 handle_osd_map epochs [45,45], i have 45, src has [1,45]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.011059 4 0.000129
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.011139 4 0.000415
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.011617 4 0.000274
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000148 1 0.000110
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000017 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.016560 2 0.000089
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=44/45 n=2 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.016599 2 0.000176
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000012 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 lc 32'10 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.064804 1 0.000109
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/39 les/c/f=45/41/0 sis=44) [1] r=0 lpr=44 pi=[39,44)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:41.833132+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58703872 unmapped: 1974272 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 45 handle_osd_map epochs [45,46], i have 45, src has [1,46]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 3.006733 7 0.000077
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 3.041584 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 4.044637 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 4.044694 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976484299s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309509277s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] exit Reset 0.000140 1 0.000204
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] exit Start 0.000016 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976387978s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309509277s@ mbc={}] enter Started/Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 2.355219 7 0.000231
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 3.041575 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 4.047231 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 4.047271 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976278305s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309593201s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] exit Reset 0.000141 1 0.000207
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] exit Start 0.000019 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976189613s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309593201s@ mbc={}] enter Started/Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 2.569489 7 0.000068
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 3.041406 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 4.050376 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 4.050400 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976170540s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309707642s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 2.139547 7 0.000224
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 3.041204 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 4.052237 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 4.052277 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] exit Reset 0.000091 1 0.000133
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976114273s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active pruub 89.309745789s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] exit Start 0.000015 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976113319s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309707642s@ mbc={}] enter Started/Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] exit Reset 0.000085 1 0.000124
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 46 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=12.976060867s) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 89.309745789s@ mbc={}] enter Started/Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 46 handle_osd_map epochs [46,46], i have 46, src has [1,46]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:42.833245+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58810368 unmapped: 1867776 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 46 handle_osd_map epochs [46,47], i have 46, src has [1,47]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027789 7 0.000175
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027345 7 0.000266
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027590 7 0.000186
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 47 handle_osd_map epochs [47,47], i have 47, src has [1,47]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 47 handle_osd_map epochs [47,47], i have 47, src has [1,47]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027415 7 0.000116
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.017389 2 0.000092
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.017513 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000108 1 0.000263
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.076207 2 0.000058
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.076307 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000143 1 0.000281
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 DELETING pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.120835 2 0.000288
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.121059 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.166491 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 DELETING pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.135980 2 0.000305
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.136271 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.240287 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.284396 2 0.000050
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.284461 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000136 1 0.000146
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.409841 2 0.000063
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.409886 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000155 1 0.000087
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 DELETING pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.135139 2 0.000246
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.135553 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.447544 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 DELETING pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.032028 2 0.000215
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.032286 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=-1 lpr=46 pi=[42,46)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.469667 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:43.833392+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58703872 unmapped: 1974272 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 47 heartbeat osd_stat(store_statfs(0x4fe0dd000/0x0/0x4ffc00000, data 0xa2a3b/0xef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:44.833580+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58703872 unmapped: 1974272 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:45.833700+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:15.720347+0000 osd.1 (osd.1) 10 : cluster [DBG] 3.b scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:15.734345+0000 osd.1 (osd.1) 11 : cluster [DBG] 3.b scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58703872 unmapped: 1974272 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 356691 data_alloc: 218103808 data_used: 0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 11) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:15.720347+0000 osd.1 (osd.1) 10 : cluster [DBG] 3.b scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:15.734345+0000 osd.1 (osd.1) 11 : cluster [DBG] 3.b scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 47 heartbeat osd_stat(store_statfs(0x4fe0dd000/0x0/0x4ffc00000, data 0xa2a3b/0xef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:46.833891+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:16.739009+0000 osd.1 (osd.1) 12 : cluster [DBG] 3.d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:16.753081+0000 osd.1 (osd.1) 13 : cluster [DBG] 3.d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58728448 unmapped: 1949696 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 13) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:16.739009+0000 osd.1 (osd.1) 12 : cluster [DBG] 3.d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:16.753081+0000 osd.1 (osd.1) 13 : cluster [DBG] 3.d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:47.834108+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58728448 unmapped: 1949696 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 47 handle_osd_map epochs [47,48], i have 47, src has [1,48]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:48.834328+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58736640 unmapped: 1941504 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 48 heartbeat osd_stat(store_statfs(0x4fe0db000/0x0/0x4ffc00000, data 0xa403e/0xf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=0 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000104 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=0 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000045
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000202 1 0.000074
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=0 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000047 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=0 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000017
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000117 1 0.000044
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001091 2 0.000068
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.000844 2 0.000054
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 48 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.566368103s of 10.003333092s, submitted: 153
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:49.834569+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 58736640 unmapped: 1941504 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.10 deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.10 deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:50.834713+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:20.644641+0000 osd.1 (osd.1) 14 : cluster [DBG] 3.10 deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:20.658779+0000 osd.1 (osd.1) 15 : cluster [DBG] 3.10 deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 48 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.593731 2 0.000086
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.593373 2 0.000077
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.595099 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 1.594402 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 11.508167 17 0.000105
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 11.835573 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 12.842137 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 12.842177 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182587624s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active pruub 97.309936523s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] exit Reset 0.000124 1 0.000180
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.309936523s@ mbc={}] enter Started/Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 11.221861 17 0.000095
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 11.835303 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 12.844997 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 12.845050 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182564735s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active pruub 97.310157776s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] exit Reset 0.000092 1 0.000118
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] exit Start 0.000009 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49 pruub=12.182506561s) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 97.310157776s@ mbc={}] enter Started/Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.004755 5 0.000240
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.004849 4 0.000186
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000107 1 0.000046
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 lc 32'9 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.040860 1 0.000050
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=48/49 n=1 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.041074 2 0.000022
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 lc 32'8 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59826176 unmapped: 851968 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 372598 data_alloc: 218103808 data_used: 8192
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.253074 1 0.000070
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=48/49 n=2 ec=39/22 lis/c=48/39 les/c/f=49/41/0 sis=48) [1] r=0 lpr=48 pi=[39,48)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:51.834922+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 4 last_log 17 sent 15 num 4 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:21.596489+0000 osd.1 (osd.1) 16 : cluster [DBG] 3.13 deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:21.610590+0000 osd.1 (osd.1) 17 : cluster [DBG] 3.13 deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 15) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:20.644641+0000 osd.1 (osd.1) 14 : cluster [DBG] 3.10 deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:20.658779+0000 osd.1 (osd.1) 15 : cluster [DBG] 3.10 deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 17) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:21.596489+0000 osd.1 (osd.1) 16 : cluster [DBG] 3.13 deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:21.610590+0000 osd.1 (osd.1) 17 : cluster [DBG] 3.13 deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 49 handle_osd_map epochs [50,50], i have 49, src has [1,50]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013354 6 0.000119
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013583 6 0.000143
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.070249 3 0.000075
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.070299 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000126 1 0.000097
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 753664 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.196211 3 0.000093
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.196264 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000158 1 0.000095
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 DELETING pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.130512 2 0.000237
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.130693 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.214421 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 DELETING pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.023458 2 0.000192
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.023695 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=-1 lpr=49 pi=[42,49)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.233629 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 50 heartbeat osd_stat(store_statfs(0x4fe0d6000/0x0/0x4ffc00000, data 0xa5c65/0xf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:52.835156+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:22.606939+0000 osd.1 (osd.1) 18 : cluster [DBG] 3.14 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:22.621012+0000 osd.1 (osd.1) 19 : cluster [DBG] 3.14 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 19) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:22.606939+0000 osd.1 (osd.1) 18 : cluster [DBG] 3.14 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:22.621012+0000 osd.1 (osd.1) 19 : cluster [DBG] 3.14 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 753664 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:53.835377+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:23.571546+0000 osd.1 (osd.1) 20 : cluster [DBG] 3.19 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:23.585701+0000 osd.1 (osd.1) 21 : cluster [DBG] 3.19 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 21) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:23.571546+0000 osd.1 (osd.1) 20 : cluster [DBG] 3.19 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:23.585701+0000 osd.1 (osd.1) 21 : cluster [DBG] 3.19 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 745472 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:54.835592+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 737280 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 50 handle_osd_map epochs [51,51], i have 50, src has [1,51]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:55.835800+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 712704 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 378620 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 52 heartbeat osd_stat(store_statfs(0x4fe0d4000/0x0/0x4ffc00000, data 0xa7035/0xf9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:56.835965+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 671744 heap: 60678144 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:57.836117+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 52 heartbeat osd_stat(store_statfs(0x4fe0cd000/0x0/0x4ffc00000, data 0xa9c3b/0xff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 52 handle_osd_map epochs [53,53], i have 52, src has [1,53]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 557056 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:58.836302+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 19.911372 33 0.000110
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 19.929968 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 20.934610 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 20.934659 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=42) [1] r=0 lpr=42 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.088303566s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 105.309837341s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] exit Reset 0.000355 1 0.000410
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] exit Start 0.000119 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 54 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=12.087999344s) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 105.309837341s@ mbc={}] enter Started/Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 54 handle_osd_map epochs [53,54], i have 54, src has [1,54]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 540672 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.732617378s of 10.148954391s, submitted: 40
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 54 heartbeat osd_stat(store_statfs(0x4fe0c7000/0x0/0x4ffc00000, data 0xac841/0x105000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:59.836517+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:29.498720+0000 osd.1 (osd.1) 22 : cluster [DBG] 3.1a deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:29.512966+0000 osd.1 (osd.1) 23 : cluster [DBG] 3.1a deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 23) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:29.498720+0000 osd.1 (osd.1) 22 : cluster [DBG] 3.1a deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:29.512966+0000 osd.1 (osd.1) 23 : cluster [DBG] 3.1a deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 540672 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.502368 6 0.000504
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000717 1 0.000084
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 19.409195 31 0.000174
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 19.414038 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 20.414417 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 20.414457 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=44) [1] r=0 lpr=44 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590789795s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 107.316719055s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] exit Reset 0.000164 1 0.000248
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] exit Start 0.000016 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=12.590709686s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.316719055s@ mbc={}] enter Started/Stray
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 DELETING pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.013391 2 0.000105
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.014206 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] lb MIN local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=-1 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.516826 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:00.836733+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:30.529252+0000 osd.1 (osd.1) 24 : cluster [DBG] 3.1c scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:30.543264+0000 osd.1 (osd.1) 25 : cluster [DBG] 3.1c scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 25) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:30.529252+0000 osd.1 (osd.1) 24 : cluster [DBG] 3.1c scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:30.543264+0000 osd.1 (osd.1) 25 : cluster [DBG] 3.1c scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 491520 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 390250 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.986485 6 0.000159
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000271 1 0.000054
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] lb MIN local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 DELETING pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.003829 2 0.000121
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] lb MIN local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.004189 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] lb MIN local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=-1 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.990739 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:01.836984+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 450560 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:02.837147+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 442368 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:03.837289+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=0 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000092 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=0 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000041
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000153 1 0.000065
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001320 2 0.000137
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000034 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 57 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 401408 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 57 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.166969 2 0.000268
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.168649 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005532 4 0.000229
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000232 1 0.000306
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000053 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.012069 2 0.000279
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000054 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=57/58 n=1 ec=39/22 lis/c=57/46 les/c/f=58/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:04.837410+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 385024 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 58 heartbeat osd_stat(store_statfs(0x4fe0bb000/0x0/0x4ffc00000, data 0xb1ed2/0x111000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:05.837603+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 385024 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 401296 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d(unlocked)] enter Initial
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=0 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=0 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000049
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000226 1 0.000093
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 59 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000961 2 0.000073
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000036 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 59 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:06.837751+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 335872 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 59 handle_osd_map epochs [59,60], i have 59, src has [1,60]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 59 handle_osd_map epochs [59,60], i have 60, src has [1,60]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.417097 2 0.000301
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.418506 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002997 3 0.000495
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000086 1 0.000073
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.102074 3 0.000038
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=59/60 n=1 ec=39/22 lis/c=59/49 les/c/f=60/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:07.837916+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 319488 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:08.838085+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:38.603854+0000 osd.1 (osd.1) 26 : cluster [DBG] 7.7 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:38.618085+0000 osd.1 (osd.1) 27 : cluster [DBG] 7.7 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61440000 unmapped: 286720 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 27) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:38.603854+0000 osd.1 (osd.1) 26 : cluster [DBG] 7.7 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:38.618085+0000 osd.1 (osd.1) 27 : cluster [DBG] 7.7 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 61 heartbeat osd_stat(store_statfs(0x4fe0b0000/0x0/0x4ffc00000, data 0xb5f8e/0x11b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:09.838255+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61480960 unmapped: 245760 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:10.838487+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 237568 heap: 61726720 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416683 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 61 handle_osd_map epochs [62,63], i have 61, src has [1,63]
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.581572533s of 11.712023735s, submitted: 33
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:11.838656+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 1146880 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:12.838818+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:42.615365+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.b scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:42.629404+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.b scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 1138688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 29) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:42.615365+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.b scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:42.629404+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.b scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:13.839031+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:43.604677+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:43.618940+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 1130496 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 31) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:43.604677+0000 osd.1 (osd.1) 30 : cluster [DBG] 7.d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:43.618940+0000 osd.1 (osd.1) 31 : cluster [DBG] 7.d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:14.839305+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:44.562201+0000 osd.1 (osd.1) 32 : cluster [DBG] 7.10 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:44.576327+0000 osd.1 (osd.1) 33 : cluster [DBG] 7.10 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 1122304 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 33) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:44.562201+0000 osd.1 (osd.1) 32 : cluster [DBG] 7.10 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:44.576327+0000 osd.1 (osd.1) 33 : cluster [DBG] 7.10 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.12 deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.12 deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:15.839539+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:45.603755+0000 osd.1 (osd.1) 34 : cluster [DBG] 7.12 deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:45.617839+0000 osd.1 (osd.1) 35 : cluster [DBG] 7.12 deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 1114112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 424817 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 35) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:45.603755+0000 osd.1 (osd.1) 34 : cluster [DBG] 7.12 deep-scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:45.617839+0000 osd.1 (osd.1) 35 : cluster [DBG] 7.12 deep-scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:16.839778+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 1114112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:17.839931+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:47.662210+0000 osd.1 (osd.1) 36 : cluster [DBG] 7.14 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:47.675701+0000 osd.1 (osd.1) 37 : cluster [DBG] 7.14 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 1105920 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 37) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:47.662210+0000 osd.1 (osd.1) 36 : cluster [DBG] 7.14 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:47.675701+0000 osd.1 (osd.1) 37 : cluster [DBG] 7.14 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:18.840261+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1089536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:19.840499+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1089536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:20.840702+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1081344 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 425965 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.323520660s of 10.366091728s, submitted: 11
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:21.840830+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:51.576877+0000 osd.1 (osd.1) 38 : cluster [DBG] 7.16 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:51.590969+0000 osd.1 (osd.1) 39 : cluster [DBG] 7.16 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1064960 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 39) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:51.576877+0000 osd.1 (osd.1) 38 : cluster [DBG] 7.16 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:51.590969+0000 osd.1 (osd.1) 39 : cluster [DBG] 7.16 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:22.841004+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1064960 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:23.841125+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:53.556060+0000 osd.1 (osd.1) 40 : cluster [DBG] 7.17 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:53.570160+0000 osd.1 (osd.1) 41 : cluster [DBG] 7.17 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 1048576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 41) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:53.556060+0000 osd.1 (osd.1) 40 : cluster [DBG] 7.17 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:53.570160+0000 osd.1 (osd.1) 41 : cluster [DBG] 7.17 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:24.841297+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:54.529004+0000 osd.1 (osd.1) 42 : cluster [DBG] 7.19 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:54.543133+0000 osd.1 (osd.1) 43 : cluster [DBG] 7.19 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 1048576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 43) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:54.529004+0000 osd.1 (osd.1) 42 : cluster [DBG] 7.19 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:54.543133+0000 osd.1 (osd.1) 43 : cluster [DBG] 7.19 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:25.841459+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 1040384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429409 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:26.841583+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:56.511887+0000 osd.1 (osd.1) 44 : cluster [DBG] 7.1d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:34:56.525996+0000 osd.1 (osd.1) 45 : cluster [DBG] 7.1d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 1040384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 45) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:56.511887+0000 osd.1 (osd.1) 44 : cluster [DBG] 7.1d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:34:56.525996+0000 osd.1 (osd.1) 45 : cluster [DBG] 7.1d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:27.841784+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 1040384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:28.841910+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 1032192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:29.842064+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 1032192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:30.842217+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:00.491194+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.1e scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:00.505207+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.1e scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 1007616 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 431705 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 47) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:00.491194+0000 osd.1 (osd.1) 46 : cluster [DBG] 7.1e scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:00.505207+0000 osd.1 (osd.1) 47 : cluster [DBG] 7.1e scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:31.842371+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 1007616 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:32.842584+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 1007616 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:33.842775+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 999424 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:34.842856+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 999424 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.960755348s of 13.993970871s, submitted: 10
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:35.842988+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:05.570679+0000 osd.1 (osd.1) 48 : cluster [DBG] 4.f scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:05.584781+0000 osd.1 (osd.1) 49 : cluster [DBG] 4.f scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 983040 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432852 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 49) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:05.570679+0000 osd.1 (osd.1) 48 : cluster [DBG] 4.f scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:05.584781+0000 osd.1 (osd.1) 49 : cluster [DBG] 4.f scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:36.843208+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 974848 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:37.843368+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:38.843566+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:39.843787+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:40.843927+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 958464 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432852 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:41.844110+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 950272 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:42.844255+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:12.687862+0000 osd.1 (osd.1) 50 : cluster [DBG] 4.d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:12.702026+0000 osd.1 (osd.1) 51 : cluster [DBG] 4.d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 933888 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 51) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:12.687862+0000 osd.1 (osd.1) 50 : cluster [DBG] 4.d scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:12.702026+0000 osd.1 (osd.1) 51 : cluster [DBG] 4.d scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:43.844504+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 925696 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:44.844666+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 925696 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:45.844798+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 433999 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:46.844949+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:47.845134+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.131777763s of 13.148053169s, submitted: 4
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:48.845307+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:18.719099+0000 osd.1 (osd.1) 52 : cluster [DBG] 4.2 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:18.733190+0000 osd.1 (osd.1) 53 : cluster [DBG] 4.2 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 909312 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 53) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:18.719099+0000 osd.1 (osd.1) 52 : cluster [DBG] 4.2 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:18.733190+0000 osd.1 (osd.1) 53 : cluster [DBG] 4.2 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:49.845548+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 909312 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:50.845690+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 901120 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 435146 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:51.845883+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 901120 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:52.846066+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 892928 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:53.846235+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 892928 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:54.846396+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:24.697793+0000 osd.1 (osd.1) 54 : cluster [DBG] 4.4 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:24.711869+0000 osd.1 (osd.1) 55 : cluster [DBG] 4.4 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 884736 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 55) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:24.697793+0000 osd.1 (osd.1) 54 : cluster [DBG] 4.4 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:24.711869+0000 osd.1 (osd.1) 55 : cluster [DBG] 4.4 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:55.847211+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:30 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 860160 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436293 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:56.847595+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:26.720528+0000 osd.1 (osd.1) 56 : cluster [DBG] 6.1 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:26.734492+0000 osd.1 (osd.1) 57 : cluster [DBG] 6.1 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 851968 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 57) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:26.720528+0000 osd.1 (osd.1) 56 : cluster [DBG] 6.1 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:26.734492+0000 osd.1 (osd.1) 57 : cluster [DBG] 6.1 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:57.848445+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 851968 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016060829s of 10.034604073s, submitted: 6
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:58.848861+0000)
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:28.753751+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.7 scrub starts
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:28.767842+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.7 scrub ok
Nov 25 23:58:30 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:30 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 843776 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 59) v1
Nov 25 23:58:30 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:28.753751+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.7 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:28.767842+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.7 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:59.849229+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 843776 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:00.849491+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:30.698001+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.5 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:30.712226+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.5 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 835584 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 439734 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 61) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:30.698001+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.5 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:30.712226+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.5 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:01.849880+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 827392 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:02.850097+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:32.624285+0000 osd.1 (osd.1) 62 : cluster [DBG] 5.1d scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:32.638363+0000 osd.1 (osd.1) 63 : cluster [DBG] 5.1d scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 63) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:32.624285+0000 osd.1 (osd.1) 62 : cluster [DBG] 5.1d scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:32.638363+0000 osd.1 (osd.1) 63 : cluster [DBG] 5.1d scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:03.850251+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:04.850373+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:05.850623+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 802816 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440882 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:06.850791+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 802816 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:07.850932+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 794624 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:08.851098+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 794624 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.789491653s of 10.819688797s, submitted: 6
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:09.851248+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:39.573549+0000 osd.1 (osd.1) 64 : cluster [DBG] 4.9 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:39.587398+0000 osd.1 (osd.1) 65 : cluster [DBG] 4.9 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 65) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:39.573549+0000 osd.1 (osd.1) 64 : cluster [DBG] 4.9 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:39.587398+0000 osd.1 (osd.1) 65 : cluster [DBG] 4.9 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:10.851512+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442029 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:11.851694+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:12.851865+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:42.508345+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.8 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:42.522449+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.8 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 67) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:42.508345+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.8 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:42.522449+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.8 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:13.852070+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:14.852238+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:44.477082+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.14 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:44.491189+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.14 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 69) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:44.477082+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.14 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:44.491189+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.14 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:15.852480+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 761856 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 444324 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:16.852744+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:46.443633+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.12 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:46.457926+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.12 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 71) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:46.443633+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.12 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:46.457926+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.12 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 761856 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:17.852950+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:47.433360+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.10 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:47.447492+0000 osd.1 (osd.1) 73 : cluster [DBG] 4.10 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 73) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:47.433360+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.10 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:47.447492+0000 osd.1 (osd.1) 73 : cluster [DBG] 4.10 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 737280 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:18.853099+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 737280 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:19.853242+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 729088 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:20.898772+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 729088 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446620 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.852318764s of 11.895004272s, submitted: 10
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:21.898942+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:51.468599+0000 osd.1 (osd.1) 74 : cluster [DBG] 2.1b scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:51.482700+0000 osd.1 (osd.1) 75 : cluster [DBG] 2.1b scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 75) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:51.468599+0000 osd.1 (osd.1) 74 : cluster [DBG] 2.1b scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:51.482700+0000 osd.1 (osd.1) 75 : cluster [DBG] 2.1b scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 712704 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:22.899190+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 704512 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:23.899384+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 704512 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:24.899546+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 696320 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:25.899659+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:55.580024+0000 osd.1 (osd.1) 76 : cluster [DBG] 2.17 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:55.594114+0000 osd.1 (osd.1) 77 : cluster [DBG] 2.17 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 77) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:55.580024+0000 osd.1 (osd.1) 76 : cluster [DBG] 2.17 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:55.594114+0000 osd.1 (osd.1) 77 : cluster [DBG] 2.17 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 679936 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 448916 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:26.899852+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 671744 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:27.899973+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 671744 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:28.900106+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:58.500728+0000 osd.1 (osd.1) 78 : cluster [DBG] 5.11 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:58.514817+0000 osd.1 (osd.1) 79 : cluster [DBG] 5.11 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 79) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:58.500728+0000 osd.1 (osd.1) 78 : cluster [DBG] 5.11 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:58.514817+0000 osd.1 (osd.1) 79 : cluster [DBG] 5.11 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 655360 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:29.900507+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:59.459754+0000 osd.1 (osd.1) 80 : cluster [DBG] 2.15 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:59.473847+0000 osd.1 (osd.1) 81 : cluster [DBG] 2.15 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 81) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:59.459754+0000 osd.1 (osd.1) 80 : cluster [DBG] 2.15 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:59.473847+0000 osd.1 (osd.1) 81 : cluster [DBG] 2.15 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 655360 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:30.900715+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:00.429488+0000 osd.1 (osd.1) 82 : cluster [DBG] 5.12 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:00.443864+0000 osd.1 (osd.1) 83 : cluster [DBG] 5.12 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 83) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:00.429488+0000 osd.1 (osd.1) 82 : cluster [DBG] 5.12 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:00.443864+0000 osd.1 (osd.1) 83 : cluster [DBG] 5.12 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 638976 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452360 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:31.900941+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:01.416751+0000 osd.1 (osd.1) 84 : cluster [DBG] 5.13 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:01.430716+0000 osd.1 (osd.1) 85 : cluster [DBG] 5.13 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 85) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:01.416751+0000 osd.1 (osd.1) 84 : cluster [DBG] 5.13 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:01.430716+0000 osd.1 (osd.1) 85 : cluster [DBG] 5.13 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:32.901200+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:33.901341+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:34.901461+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62152704 unmapped: 622592 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:35.901597+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 614400 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 453508 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:36.901734+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 606208 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.911233902s of 15.955444336s, submitted: 12
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:37.901901+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:07.423953+0000 osd.1 (osd.1) 86 : cluster [DBG] 5.9 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:07.438037+0000 osd.1 (osd.1) 87 : cluster [DBG] 5.9 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 87) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:07.423953+0000 osd.1 (osd.1) 86 : cluster [DBG] 5.9 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:07.438037+0000 osd.1 (osd.1) 87 : cluster [DBG] 5.9 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 589824 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:38.902124+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 589824 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:39.902239+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 581632 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:40.902357+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 581632 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454655 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:41.902492+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 573440 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:42.902663+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 573440 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:43.902880+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 565248 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:44.903368+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 557056 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:45.903611+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 557056 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454655 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:46.903772+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:47.903901+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.033292770s of 11.040759087s, submitted: 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:48.904022+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:18.464712+0000 osd.1 (osd.1) 88 : cluster [DBG] 2.3 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:18.478778+0000 osd.1 (osd.1) 89 : cluster [DBG] 2.3 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 89) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:18.464712+0000 osd.1 (osd.1) 88 : cluster [DBG] 2.3 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:18.478778+0000 osd.1 (osd.1) 89 : cluster [DBG] 2.3 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:49.904257+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 540672 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:50.904426+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 540672 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 455802 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:51.904687+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:21.510339+0000 osd.1 (osd.1) 90 : cluster [DBG] 2.4 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:21.524302+0000 osd.1 (osd.1) 91 : cluster [DBG] 2.4 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 516096 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 91) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:21.510339+0000 osd.1 (osd.1) 90 : cluster [DBG] 2.4 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:21.524302+0000 osd.1 (osd.1) 91 : cluster [DBG] 2.4 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:52.904990+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:22.556881+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.7 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:22.570941+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.7 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 516096 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 93) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:22.556881+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.7 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:22.570941+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.7 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:53.905262+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 507904 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:54.905415+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 507904 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:55.905598+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 499712 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458096 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:56.905757+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 491520 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:57.905907+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:27.591428+0000 osd.1 (osd.1) 94 : cluster [DBG] 2.6 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:27.605504+0000 osd.1 (osd.1) 95 : cluster [DBG] 2.6 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 491520 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 95) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:27.591428+0000 osd.1 (osd.1) 94 : cluster [DBG] 2.6 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:27.605504+0000 osd.1 (osd.1) 95 : cluster [DBG] 2.6 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:58.906069+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 483328 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:59.906272+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 475136 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.077341080s of 12.105804443s, submitted: 8
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:00.906468+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:30.570514+0000 osd.1 (osd.1) 96 : cluster [DBG] 2.d scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:30.584652+0000 osd.1 (osd.1) 97 : cluster [DBG] 2.d scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 458752 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 460390 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 97) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:30.570514+0000 osd.1 (osd.1) 96 : cluster [DBG] 2.d scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:30.584652+0000 osd.1 (osd.1) 97 : cluster [DBG] 2.d scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:01.907717+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 450560 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:02.908070+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 450560 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:03.909133+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:04.909687+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:05.910943+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 460390 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:06.911612+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:36.568094+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.16 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:36.582258+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.16 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 425984 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 99) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:36.568094+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.16 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:36.582258+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.16 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:07.911843+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 425984 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:08.912010+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 417792 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:09.912249+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:39.597771+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.a deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:39.611914+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.a deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 417792 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 101) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:39.597771+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.a deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:39.611914+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.a deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:10.912461+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:11.912676+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:12.913279+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:13.913756+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 401408 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:14.914223+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 401408 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:15.914595+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:16.914921+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:17.915059+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:18.915217+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 385024 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:19.915385+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 385024 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:20.915522+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 376832 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:21.915635+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 360448 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:22.915817+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 360448 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.019342422s of 23.038656235s, submitted: 6
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:23.915968+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:53.609229+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.f scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:53.623287+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.f scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 344064 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 103) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:53.609229+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.f scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:53.623287+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.f scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:24.916236+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:54.576098+0000 osd.1 (osd.1) 104 : cluster [DBG] 2.5 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:54.590204+0000 osd.1 (osd.1) 105 : cluster [DBG] 2.5 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 344064 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 105) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:54.576098+0000 osd.1 (osd.1) 104 : cluster [DBG] 2.5 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:54.590204+0000 osd.1 (osd.1) 105 : cluster [DBG] 2.5 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:25.916529+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464979 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:26.916731+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:27.916884+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:28.917063+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 319488 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:29.917275+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 319488 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:30.917483+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464979 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:31.917708+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:32.917862+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:33.918044+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:03.446915+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.9 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:03.460961+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.9 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 107) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:03.446915+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.9 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:03.460961+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.9 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 303104 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:34.918352+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 303104 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:35.918639+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.775456429s of 12.810349464s, submitted: 6
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 467273 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 294912 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:36.918773+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:06.419652+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.1 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:06.433774+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.1 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 109) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:06.419652+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.1 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:06.433774+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.1 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 286720 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:37.918969+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:07.432757+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.1a deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:07.446848+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.1a deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 111) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:07.432757+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.1a deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:07.446848+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.1a deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 270336 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:38.919209+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:08.421484+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.19 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:08.435543+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.19 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 113) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:08.421484+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.19 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:08.435543+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.19 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 262144 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:39.919588+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 245760 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:40.919722+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:10.366751+0000 osd.1 (osd.1) 114 : cluster [DBG] 5.c scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:10.380842+0000 osd.1 (osd.1) 115 : cluster [DBG] 5.c scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 115) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:10.366751+0000 osd.1 (osd.1) 114 : cluster [DBG] 5.c scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:10.380842+0000 osd.1 (osd.1) 115 : cluster [DBG] 5.c scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470716 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 237568 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:41.919907+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 237568 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:42.920060+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 229376 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:43.920260+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 221184 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:44.920384+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 221184 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:45.920543+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470716 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:46.920663+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:47.920776+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:48.920907+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:49.921078+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:50.921293+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.953562737s of 14.978596687s, submitted: 8
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471864 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:51.921539+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:21.398317+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.18 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:21.412412+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.18 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 117) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:21.398317+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.18 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:21.412412+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.18 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 188416 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:52.921807+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 188416 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:53.922096+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 180224 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:54.922325+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 180224 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:55.922482+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473011 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 163840 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:56.922670+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:26.383011+0000 osd.1 (osd.1) 118 : cluster [DBG] 6.2 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:26.397073+0000 osd.1 (osd.1) 119 : cluster [DBG] 6.2 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 119) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:26.383011+0000 osd.1 (osd.1) 118 : cluster [DBG] 6.2 deep-scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:26.397073+0000 osd.1 (osd.1) 119 : cluster [DBG] 6.2 deep-scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 155648 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:57.922895+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 155648 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:58.923095+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 147456 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:59.923296+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 147456 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:00.923460+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473011 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:01.923598+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:02.923782+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:03.923947+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.053456306s of 13.070212364s, submitted: 4
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:04.924106+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:34.468396+0000 osd.1 (osd.1) 120 : cluster [DBG] 6.6 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:34.486014+0000 osd.1 (osd.1) 121 : cluster [DBG] 6.6 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 121) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:34.468396+0000 osd.1 (osd.1) 120 : cluster [DBG] 6.6 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:34.486014+0000 osd.1 (osd.1) 121 : cluster [DBG] 6.6 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:05.924554+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 474158 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:06.925965+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 106496 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:07.926711+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 106496 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:08.926907+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 90112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:09.927103+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:39.523649+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.e scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:39.541341+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.e scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 123) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:39.523649+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.e scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:39.541341+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.e scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 90112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:10.927279+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:40.481572+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.c scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:40.499231+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.c scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 125) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:40.481572+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.c scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:40.499231+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.c scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476452 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:11.927561+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:12.928119+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:13.928546+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 65536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:14.928724+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.003563881s of 11.035881042s, submitted: 6
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 65536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:15.928914+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:45.504493+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.4 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:45.532795+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.4 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 127) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:45.504493+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.4 scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:45.532795+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.4 scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477599 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 57344 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:16.929123+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 40960 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:17.929513+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:47.566577+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.b scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:47.584273+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.b scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 129) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:47.566577+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.b scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:47.584273+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.b scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 32768 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:18.929797+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 24576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:19.930017+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:49.555024+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.d scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:49.576236+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.d scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 131) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:49.555024+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.d scrub starts
Nov 25 23:58:31 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:49.576236+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.d scrub ok
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 24576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:20.930322+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:21.930528+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:22.930821+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:23.930973+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 8192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:24.931165+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 8192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:25.931345+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 0 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:26.931609+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 0 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:27.931793+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:28.932111+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:29.932462+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:30.933271+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:31.933478+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:32.933787+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:33.933980+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:34.934249+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:35.934406+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:36.934770+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:37.935020+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:38.935305+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:39.935731+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:40.935938+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:41.936260+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:42.936493+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:43.936736+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1007616 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:44.936912+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1007616 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:45.937117+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:46.937434+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:47.937646+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:48.937854+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:49.938133+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 991232 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:50.938359+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 991232 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:51.938666+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 983040 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:52.938947+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 974848 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:53.939149+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 974848 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:54.939343+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:55.939577+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:56.939813+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:57.940046+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 958464 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:58.940301+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 958464 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:59.940530+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:00.940739+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:01.940929+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:02.941125+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 942080 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:03.941250+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 942080 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:04.941394+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:05.941586+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:06.941790+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:07.941969+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:08.942209+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:09.942448+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:10.942578+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 917504 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:11.942803+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:12.942982+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:13.943211+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:14.943393+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 901120 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:15.943556+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 901120 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:16.943696+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:17.943849+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:18.944032+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:19.944213+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 884736 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:20.944362+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 884736 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:21.944522+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:22.944632+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:23.944792+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:24.944940+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 868352 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:25.945150+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 868352 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:26.945396+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:27.945669+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:28.945854+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:29.946238+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:30.946441+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:31.946591+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:32.946778+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:33.946984+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 843776 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:34.947238+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 843776 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:35.947385+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 835584 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:36.947551+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 835584 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:37.948042+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:38.948221+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:39.948560+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:40.948851+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 819200 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:41.949073+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 819200 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:42.949255+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:43.949430+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:44.949549+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:45.949686+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 802816 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:46.949884+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 802816 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:47.950030+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:48.950255+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:49.950443+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:50.950595+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:51.950731+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:52.950893+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:53.951046+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:54.951234+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:55.951346+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:56.951485+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:57.951631+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:58.951786+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 770048 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:59.951952+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 770048 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:00.952066+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:01.952158+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:02.952295+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:03.952427+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:04.952558+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:05.952718+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:06.952930+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 745472 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:07.954562+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 745472 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:08.954675+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 737280 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:09.954833+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 737280 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:10.955018+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:11.955247+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:12.955396+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:13.955536+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 720896 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:14.955660+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 720896 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:15.955798+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:16.955911+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:17.956033+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:18.956143+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 696320 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:19.956359+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 696320 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:20.956493+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:21.956614+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:22.956790+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:23.956977+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:24.957262+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:25.957374+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:26.957532+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 671744 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:27.957702+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 671744 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:28.957865+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:29.958049+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:30.958225+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:31.958439+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 655360 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:32.958607+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 655360 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:33.958779+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:34.958909+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:35.959072+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:36.959254+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:37.959413+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:38.959604+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:39.959813+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 630784 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:40.959972+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 630784 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:41.960131+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:42.960256+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:43.960436+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:44.960584+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:45.960755+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:46.960889+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:47.961047+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 606208 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:48.961262+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 606208 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:49.961450+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:50.961625+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:51.961796+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:52.961946+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:53.962198+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:54.962381+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:55.962502+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 581632 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:56.962652+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 581632 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:57.962811+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:58.962974+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:59.963159+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:00.963342+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:01.963506+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:02.963730+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:03.963892+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 565248 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:04.964062+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 565248 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:05.964232+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:06.964344+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:07.964484+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:08.964624+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 548864 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:09.964800+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 548864 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:10.964966+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:11.965154+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:12.965310+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:13.965499+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 532480 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:14.965668+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 532480 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:15.965824+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:16.965974+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:17.966134+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:18.966254+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:19.966405+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:20.966635+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:21.966785+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 507904 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:22.966979+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 507904 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:23.967140+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 499712 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:24.967277+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 499712 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:25.967435+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 491520 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:26.967969+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:27.968099+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:28.968367+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:29.968558+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:30.968700+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:31.968859+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:32.969220+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 466944 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:33.969513+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 466944 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:34.969717+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 458752 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:35.969985+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 458752 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:36.970159+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:37.970553+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:38.970890+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:39.971108+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 442368 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:40.971236+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 434176 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:41.971433+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 434176 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:42.971572+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 425984 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:43.971732+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 425984 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:44.971846+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:45.971962+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:46.972073+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:47.972218+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 409600 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:48.972369+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 409600 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:49.972652+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:50.972883+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:51.973158+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:52.973333+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 393216 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:53.973623+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 393216 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:54.973803+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:55.973956+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:56.974103+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:57.974274+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:58.974450+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:59.975466+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:00.975738+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 368640 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:01.976235+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 368640 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:02.976531+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:03.976741+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:04.976997+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:05.977360+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:06.977807+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:07.978006+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:08.978214+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:09.978407+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:10.978559+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:11.978713+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 335872 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:12.978869+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 335872 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:13.979035+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:14.979158+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:15.979284+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:16.979411+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 319488 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:17.979564+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 319488 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:18.979728+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:19.979958+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:20.980161+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:21.980905+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 303104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:22.981098+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 303104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:23.982480+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:24.982874+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:25.983022+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:26.983213+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:27.983390+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:28.983720+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:29.984720+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:30.984837+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:31.985033+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:32.985219+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 270336 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:33.985385+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 270336 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:34.985555+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:35.985722+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:36.986298+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:37.986434+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:38.986667+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:39.986885+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:40.987081+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 245760 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:41.987278+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 245760 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:42.987489+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:43.987649+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:44.987817+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:45.987945+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 229376 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:46.988097+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 229376 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:47.988237+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 221184 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:48.988411+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 221184 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:49.988613+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:50.988735+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:51.988846+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:52.989160+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:53.989342+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:54.989441+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:55.989556+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:56.989683+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 196608 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:57.989828+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 196608 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:58.989984+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:59.990137+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:00.990290+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:01.990427+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 180224 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:02.990609+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 180224 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:03.990752+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:04.990914+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:05.991116+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:06.991902+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 163840 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:07.992108+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 163840 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:08.992264+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 155648 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:09.992442+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 155648 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:10.992581+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 147456 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:11.992722+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:12.992866+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:13.993011+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:14.993248+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:15.993373+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:16.993573+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:17.993750+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 122880 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:18.993863+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 122880 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:19.994079+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 114688 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:20.994230+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 114688 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:21.994443+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:22.994590+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:23.994729+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:24.994870+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 16.54 MB, 0.03 MB/s
                                           Interval WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 16384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:25.995072+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 8192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:26.995251+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 8192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:27.995380+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 0 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:28.995501+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 0 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:29.995644+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:30.995758+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:31.995898+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:32.996110+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:33.996257+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:34.996438+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:35.996573+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:36.996748+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:37.996943+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:38.997148+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:39.997347+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:40.997492+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:41.997621+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:42.997744+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:43.997893+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:44.998059+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1007616 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:45.998256+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1007616 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:46.998473+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:47.998641+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:48.998758+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:49.998911+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 991232 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:50.999056+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 991232 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:51.999248+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:52.999413+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:53.999576+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:54.999735+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:55.999892+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:57.000050+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:58.000197+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 966656 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:59.000359+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 966656 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:00.000553+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:01.000754+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:02.000953+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:03.001205+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 950272 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:04.001368+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 950272 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:05.001562+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:06.001817+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:07.002029+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:08.002225+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 933888 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:09.002418+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 933888 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:10.002595+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:11.002779+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:12.002939+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:13.003126+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:14.003228+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:15.003369+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:16.003491+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 909312 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:17.003638+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 909312 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:18.003762+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:19.003981+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:20.004272+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:21.004479+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:22.004693+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:23.004927+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:24.005146+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 884736 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:25.005322+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 884736 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:26.005503+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:27.005690+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:28.005885+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:29.006022+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 868352 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:30.006164+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 868352 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:31.006303+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:32.006439+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:33.006636+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:34.006755+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:35.006920+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:36.007077+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:37.007219+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 843776 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:38.007349+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 843776 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:39.007470+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:40.007620+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:41.007746+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:42.007894+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:43.008024+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:44.008151+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:45.008236+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 819200 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:46.008379+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 819200 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:47.008507+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:48.008627+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:49.008893+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:50.009091+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 794624 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:51.009251+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 794624 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:52.009412+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:53.009599+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:54.009754+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:55.009921+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:56.010071+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:57.010232+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:58.010534+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:59.010649+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:00.010822+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:01.010974+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 761856 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:02.011108+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 761856 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:03.011242+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:04.011367+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:05.011510+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:06.011673+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:07.011824+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 745472 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:08.012030+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 745472 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:09.012199+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 737280 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:10.012411+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 737280 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:11.012552+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 729088 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:12.012741+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 729088 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:13.012955+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:14.013108+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:15.013276+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:16.013433+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 712704 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:17.013557+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 712704 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:18.013723+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:19.013859+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:20.014051+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:21.014214+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:22.014362+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:23.014530+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:24.014668+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:25.014793+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:26.014912+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:27.015246+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:28.015493+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:29.015670+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:30.015946+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:31.016090+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:32.016297+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:33.016549+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:34.016752+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:35.016899+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:36.017021+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:37.017228+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:38.017349+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:39.017540+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:40.017771+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:41.017903+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:42.018108+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:43.018257+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:44.018401+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:45.018590+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:46.018719+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:47.018868+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:48.019048+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:49.019222+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:50.019392+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:51.121456+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:52.121609+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:53.121731+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:54.121853+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:55.122019+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:56.122130+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:57.122228+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:58.122365+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:59.122524+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:00.122761+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:01.122927+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:02.123043+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:03.123221+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:04.123387+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:05.123531+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:06.123714+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:07.123895+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:08.124051+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:09.124258+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:10.124593+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:11.124791+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:12.125027+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:13.125269+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:14.125385+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:15.125495+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:16.125622+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:17.125749+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:18.125874+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:19.126004+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:20.126224+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:21.126384+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:22.126550+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:23.126733+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:24.126919+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:25.127074+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:26.127658+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:27.127780+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:28.127928+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:29.128118+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:30.128434+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:31.128593+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:32.131042+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:33.131229+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:34.131351+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:35.131546+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:36.131678+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:37.131803+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:38.131954+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:39.132133+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:40.132316+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:41.132535+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:42.132717+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:43.132903+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:44.133090+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:45.133240+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:46.133380+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:47.133538+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:48.133664+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:49.133833+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:50.133999+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:51.134159+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:52.134307+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:53.134437+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:54.134616+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:55.134750+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:56.134852+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:57.135014+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:58.135128+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:59.135312+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:00.135536+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:01.135705+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:02.135867+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:03.136016+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:04.136163+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:05.136336+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:06.136510+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:07.136677+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:08.136858+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:09.137017+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:10.137241+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:11.137361+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:12.137492+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:13.137672+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:14.137810+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:15.137942+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:16.138108+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:17.138264+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:18.138413+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:19.138509+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:20.138876+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:21.138994+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:22.139148+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:23.139253+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:24.139378+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:25.139518+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:26.139664+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:27.139760+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:28.139937+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:29.140093+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:30.140236+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:31.140396+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:32.140569+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:33.140779+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:34.140923+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:35.141071+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:36.141217+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:37.141406+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:38.141563+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:39.141670+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:40.141879+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:41.142055+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:42.142187+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:43.142308+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:44.142445+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:45.142713+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:46.142846+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:47.142999+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:48.143303+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:49.143426+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:50.144427+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:51.144574+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:52.144701+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:53.144875+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:54.145021+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:55.145411+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:56.145643+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:57.145841+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:58.146031+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:59.146226+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:00.146371+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:01.146502+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:02.146599+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:03.146758+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:04.146925+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:05.147136+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:06.147264+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:07.147454+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:08.147771+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:09.148012+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:10.148249+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:11.148450+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:12.148616+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:13.148734+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:14.148875+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:15.158162+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:16.158361+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:17.158499+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:18.158684+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:19.158917+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:20.159798+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:21.159947+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:22.160106+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:23.160287+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:24.160446+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:25.160589+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:26.160739+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:27.160876+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:28.161036+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:29.161228+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:30.161409+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:31.187242+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: mgrc ms_handle_reset ms_handle_reset con 0x5613ea45dc00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 25 23:58:31 compute-0 ceph-osd[90055]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: get_auth_request con 0x5613eaec7000 auth_method 0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: mgrc handle_mgr_configure stats_period=5
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 ms_handle_reset con 0x5613eb896400 session 0x5613eb1c5860
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613eb896c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 ms_handle_reset con 0x5613eb897000 session 0x5613ebc8a5a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613eb896400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:32.187392+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:33.187546+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:34.187717+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:35.187882+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:36.188087+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:37.188242+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:38.188375+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:39.188542+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:40.188748+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:41.188856+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:42.188991+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:43.189142+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:44.189258+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:45.189463+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:46.189745+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:47.189937+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:48.190116+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:49.190291+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:50.190519+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:51.190676+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:52.190805+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:53.190961+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:54.191114+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:55.191282+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:56.191412+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:57.191558+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:58.191697+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:59.191819+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:00.191990+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:01.192143+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:02.192318+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:03.192437+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:04.192592+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:05.192709+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:06.192845+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:07.192957+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:08.193085+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:09.193232+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:10.193783+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:11.193936+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:12.194093+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:13.194234+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:14.194381+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:15.194527+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:16.194665+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:17.194806+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:18.194957+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:19.195104+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:20.195248+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:21.195370+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:22.195512+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:23.195662+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:24.195875+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:25.196073+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:26.196328+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:27.196445+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:28.196786+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:29.196954+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:30.197329+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:31.197569+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:32.197820+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:33.197979+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:34.198194+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:35.198341+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:36.198479+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:37.198609+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:38.198725+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:39.199001+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:40.199211+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:41.199367+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:42.199506+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:43.199664+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:44.199791+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:45.199955+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:46.200295+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:47.200499+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:48.200617+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:49.200779+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:50.200952+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:51.201090+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:52.201325+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:53.201485+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:54.201623+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:55.201924+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:56.202105+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:57.202586+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:58.202731+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:59.203027+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:00.203233+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:01.203737+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:02.204213+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:03.204591+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:04.205467+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:05.206255+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:06.206726+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:07.206961+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:08.207157+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:09.207374+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:10.207591+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:11.207820+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:12.207962+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:13.208227+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:14.208449+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:15.208669+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:16.208825+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:17.208950+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:18.209082+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:19.209219+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:20.209385+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:21.209562+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:22.209739+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:23.209949+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:24.210233+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:25.210508+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:26.210686+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:27.210849+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:28.211014+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:29.211239+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:30.211461+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:31.211630+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:32.211791+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:33.212014+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:34.212443+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:35.212630+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:36.212742+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:37.212866+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:38.213003+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:39.213134+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:40.213386+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:41.213568+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:42.213791+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:43.213969+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:44.214152+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:45.214322+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:46.214503+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:47.214838+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:48.214962+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:49.215113+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:50.215284+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:51.215392+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:52.215556+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:53.215690+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:54.215827+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:55.215953+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:56.216081+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:57.216265+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:58.216489+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:59.216677+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:00.217251+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:01.218421+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:02.218666+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:03.218936+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:04.219295+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:05.219845+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:06.219975+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:07.220124+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:08.220249+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:09.220423+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:10.220594+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:11.220750+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:12.220938+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:13.221112+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:14.221349+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:15.221522+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:16.221695+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:17.221916+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:18.265360+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:19.265527+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:20.265689+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:21.265838+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:22.265996+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:23.266155+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:24.266362+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:25.266480+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:26.266600+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:27.266729+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:28.266848+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:29.267224+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:30.267366+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:31.267517+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:32.267656+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:33.267814+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:34.268013+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:35.268220+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:36.268369+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:37.268515+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:38.268628+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:39.268823+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:40.269061+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:41.269228+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:42.269470+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:43.269705+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:44.269934+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:45.270070+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:46.270218+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:47.270382+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:48.270580+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:49.270733+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:50.270951+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:51.271102+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:54.050420+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:55.050588+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:56.050832+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:57.050987+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:58.051087+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:59.051220+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:00.051329+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:01.051472+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:02.051611+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:03.051753+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:04.053437+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:05.054339+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:06.056125+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:07.057136+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:08.057652+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:09.057889+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:10.058086+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:11.058968+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:12.059636+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:13.059967+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:14.060282+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:15.060411+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:16.060778+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:17.061090+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:18.061347+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:19.061699+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:20.061960+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:21.062257+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:22.062452+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:23.062608+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:24.062761+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:25.062903+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:26.063012+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:27.063240+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:28.063427+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:29.063661+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:30.063951+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:31.064133+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:32.064252+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:33.064368+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:34.064535+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:35.064692+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:36.064808+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:37.064944+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:38.065100+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:39.065358+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:40.065480+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:41.065622+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:42.065744+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:43.065871+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:44.066036+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:45.066164+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:46.066360+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:47.066467+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:48.066600+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:49.066707+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:50.066866+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:51.067090+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:52.365612+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:53.365754+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:54.365975+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:55.366093+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:56.366257+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:57.366416+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:58.366565+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:59.366744+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:00.366881+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:01.367061+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:02.367206+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:03.367378+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:04.367537+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:05.367707+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:06.368678+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:07.368844+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:08.369026+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:09.369253+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:10.369473+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:11.369728+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:12.369924+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:13.370113+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:14.370298+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:15.370480+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:16.370647+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:17.370843+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:18.370974+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:19.371239+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:20.371418+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:21.371641+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:22.371834+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:23.372009+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:24.372204+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:25.372383+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:26.372564+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:27.372722+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:28.372905+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:29.373074+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:30.373262+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:31.373504+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:32.373831+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:33.374024+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:34.374299+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:35.374491+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:36.374695+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:37.374872+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:38.375070+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:39.375254+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:40.375445+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:41.375651+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:42.375838+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:43.376028+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:44.376204+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:45.376371+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:46.376611+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:47.376770+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:48.376935+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:49.377074+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:50.377238+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:51.377428+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:52.377599+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:53.377822+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:54.378009+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:55.378230+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:56.378399+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:57.378568+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:58.378770+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:59.378967+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:00.379117+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:01.379375+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:02.379515+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:03.379712+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:04.379843+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:05.379995+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:06.380157+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:07.380393+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:08.380567+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:09.380712+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:10.380928+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:11.381155+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:12.381356+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:13.381496+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:14.381699+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:15.381868+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:16.382016+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:17.382232+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:18.382453+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:19.382658+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:20.382843+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:21.383053+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:22.383256+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:23.383434+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:24.383597+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:25.383750+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:26.383954+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:27.384113+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:28.384265+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:29.384399+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:30.384595+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:31.384838+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:32.385033+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:33.385304+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:34.385500+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:35.385643+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:36.385812+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:37.386011+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:38.386146+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:39.386322+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:40.386487+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:41.386678+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:42.386810+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:43.386971+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:44.387139+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:45.387288+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:46.387441+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:47.387589+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:48.387710+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:49.387929+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:50.388121+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:51.388376+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:52.388514+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:53.388680+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:54.388829+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:55.388951+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:56.389049+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:57.389266+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:58.389410+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:59.389568+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:00.389761+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:01.389971+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:02.390126+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:03.390304+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:04.390454+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:05.390625+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:06.390800+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:07.390997+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:08.391131+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:09.391317+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:10.391540+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:11.391750+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1016.414916992s of 1016.440307617s, submitted: 6
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 573440 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:12.391949+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 499712 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:13.392108+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb5dc/0x128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 17219584 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:14.392276+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 66 ms_handle_reset con 0x5613ece8c800 session 0x5613eb982b40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 17137664 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:15.392420+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 547922 data_alloc: 218103808 data_used: 24576
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 17137664 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:16.392580+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 17129472 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:17.392797+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed314000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 16949248 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:18.393008+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 67 ms_handle_reset con 0x5613ed314000 session 0x5613ebd4de00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fd09f000/0x0/0x4ffc00000, data 0x10be188/0x112e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:19.393240+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:20.393529+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:21.393809+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:22.393982+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:23.394232+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:24.394438+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:25.394665+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:26.394846+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:27.395075+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:28.395329+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:29.395516+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:30.395698+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:31.395895+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:32.396069+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:33.396281+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:34.396458+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:35.396616+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:36.396786+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:37.396939+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:38.397130+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:39.397267+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:40.397434+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:41.397687+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:42.397885+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:43.398050+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:44.398239+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:45.398385+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:46.398537+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:47.398731+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:48.398875+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:49.399111+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:50.399328+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:51.399728+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 16900096 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:52.400081+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 16900096 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed314400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.834548950s of 41.041542053s, submitted: 53
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:53.400266+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 69 ms_handle_reset con 0x5613ed314400 session 0x5613eb995e00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 16883712 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:54.400619+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fd099000/0x0/0x4ffc00000, data 0x10c0be2/0x1134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 16883712 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:55.400889+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 16883712 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 615480 data_alloc: 218103808 data_used: 45056
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fc89a000/0x0/0x4ffc00000, data 0x18c0be2/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:56.401035+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 16785408 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:57.401200+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 24993792 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fb09a000/0x0/0x4ffc00000, data 0x30c0be2/0x3134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:58.401328+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 24993792 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 70 ms_handle_reset con 0x5613ed315c00 session 0x5613ebe07e00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:59.401461+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 24961024 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:00.401783+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 24961024 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 71 ms_handle_reset con 0x5613ed315800 session 0x5613eb982b40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 629376 data_alloc: 218103808 data_used: 53248
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:01.402074+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 24928256 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fd093000/0x0/0x4ffc00000, data 0x10c37aa/0x113a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:02.402273+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 23871488 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 72 ms_handle_reset con 0x5613ece8c800 session 0x5613eb9823c0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:03.402476+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 23797760 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed314000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:04.402593+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.184603691s of 11.587786674s, submitted: 73
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 23781376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 73 ms_handle_reset con 0x5613ed314000 session 0x5613ecd212c0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:05.403519+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed314400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 73 ms_handle_reset con 0x5613ed314400 session 0x5613ec2ad2c0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 23781376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 73 ms_handle_reset con 0x5613ed315c00 session 0x5613eb982000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 636864 data_alloc: 218103808 data_used: 61440
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:06.403815+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 23748608 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 74 heartbeat osd_stat(store_statfs(0x4fd084000/0x0/0x4ffc00000, data 0x10c80af/0x1149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:07.403988+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 23658496 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 74 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613ed315400 session 0x5613eb982b40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613edb41400 session 0x5613ecd463c0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613ece8c800 session 0x5613ec2ad4a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613ed315c00 session 0x5613ec2e1e00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:08.404166+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 23584768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee378000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee378c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:09.404371+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 23560192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 76 ms_handle_reset con 0x5613ee378c00 session 0x5613eaf134a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 76 ms_handle_reset con 0x5613ee378000 session 0x5613ea3f1e00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:10.404535+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 22372352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 77 ms_handle_reset con 0x5613ed315c00 session 0x5613eaf894a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 662651 data_alloc: 218103808 data_used: 69632
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 77 ms_handle_reset con 0x5613ece8c800 session 0x5613ec2ac1e0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:11.404809+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 22249472 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fcc6a000/0x0/0x4ffc00000, data 0x10cbf1a/0x1150000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:12.404994+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 22216704 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 77 ms_handle_reset con 0x5613edb41400 session 0x5613ea5e7680
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee378c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee379000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:13.405195+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 22183936 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:14.405342+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 22151168 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.674486160s of 10.168670654s, submitted: 142
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 79 ms_handle_reset con 0x5613ee379000 session 0x5613eaf89e00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 79 ms_handle_reset con 0x5613ee378c00 session 0x5613ea5e74a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 79 ms_handle_reset con 0x5613ece8c800 session 0x5613eb003e00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:15.405503+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 22044672 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 672501 data_alloc: 218103808 data_used: 77824
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:16.405655+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 22044672 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:17.405859+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 22044672 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcc65000/0x0/0x4ffc00000, data 0x10cebaf/0x1158000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:18.406027+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 22118400 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:19.406243+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 22118400 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:20.406426+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 22011904 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 81 ms_handle_reset con 0x5613ed315c00 session 0x5613eac6d2c0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683009 data_alloc: 218103808 data_used: 77824
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:21.406663+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 81 heartbeat osd_stat(store_statfs(0x4fcc5d000/0x0/0x4ffc00000, data 0x10d16d0/0x1160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 20881408 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 82 ms_handle_reset con 0x5613edb41400 session 0x5613eaf894a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fcc5d000/0x0/0x4ffc00000, data 0x10d16d0/0x1160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:22.406796+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fbab8000/0x0/0x4ffc00000, data 0x10d2c9a/0x1164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee379000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 20807680 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 83 ms_handle_reset con 0x5613ee379000 session 0x5613ecd1ab40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:23.406943+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee379400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69312512 unmapped: 20742144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 84 ms_handle_reset con 0x5613ee379400 session 0x5613eb003860
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:24.407081+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 20643840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.775205612s of 10.068033218s, submitted: 94
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 85 ms_handle_reset con 0x5613ece8c800 session 0x5613ea5e6960
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fbaab000/0x0/0x4ffc00000, data 0x10d86e0/0x1171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:25.407245+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 20619264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fbaab000/0x0/0x4ffc00000, data 0x10d86e0/0x1171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613ed315c00 session 0x5613eb994960
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 710759 data_alloc: 218103808 data_used: 77824
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:26.407393+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 20635648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613edb41400 session 0x5613ecd1ad20
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:27.407545+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 20635648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee379000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613ee379000 session 0x5613ecd1a780
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:28.407693+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 20635648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613edb63000 session 0x5613ecd1a000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 86 heartbeat osd_stat(store_statfs(0x4fbaa6000/0x0/0x4ffc00000, data 0x10d9caa/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:29.407813+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 20619264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 87 ms_handle_reset con 0x5613ece8c800 session 0x5613ecd1b4a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:30.407891+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 20578304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 88 ms_handle_reset con 0x5613ed315c00 session 0x5613eada63c0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 716900 data_alloc: 218103808 data_used: 77824
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 88 ms_handle_reset con 0x5613edb41400 session 0x5613eb0032c0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:31.408107+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee379000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 20430848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 89 ms_handle_reset con 0x5613edb63000 session 0x5613ec2e0b40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fba9d000/0x0/0x4ffc00000, data 0x10dd738/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:32.408265+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 20406272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 90 ms_handle_reset con 0x5613edb63800 session 0x5613eb982b40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:33.408425+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 20316160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 91 ms_handle_reset con 0x5613ece8c800 session 0x5613eac6cb40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:34.408621+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 20250624 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.562761307s of 10.121302605s, submitted: 134
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 92 ms_handle_reset con 0x5613edb63000 session 0x5613eada6000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 92 ms_handle_reset con 0x5613ed315c00 session 0x5613eac6cf00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 92 ms_handle_reset con 0x5613edb41400 session 0x5613ebe063c0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:35.408771+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 19079168 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 726803 data_alloc: 218103808 data_used: 90112
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 93 ms_handle_reset con 0x5613edb63c00 session 0x5613ebd4cd20
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba9a000/0x0/0x4ffc00000, data 0x10e215e/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:36.408944+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 19185664 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 93 ms_handle_reset con 0x5613ece8c800 session 0x5613ea5e7c20
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 94 ms_handle_reset con 0x5613ed315c00 session 0x5613ec2ade00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:37.409133+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 19193856 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:38.409281+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 19193856 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 95 ms_handle_reset con 0x5613edb41400 session 0x5613eb983860
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:39.409432+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 19177472 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:40.409568+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb63000 session 0x5613ea3f03c0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 738425 data_alloc: 218103808 data_used: 86016
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:41.409714+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba8f000/0x0/0x4ffc00000, data 0x10e67de/0x118d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba8f000/0x0/0x4ffc00000, data 0x10e67de/0x118d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:42.409888+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:43.410081+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613eb862400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613eb862400 session 0x5613eaf12d20
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ece8c800 session 0x5613eaf12b40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ed315c00 session 0x5613eaf125a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb41400 session 0x5613ecd1a000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb4c000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb4c000 session 0x5613ecd1ad20
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb63000 session 0x5613ecd1a780
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ece8c800 session 0x5613eb994960
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ed315c00 session 0x5613eb994780
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb41400 session 0x5613eb002b40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:44.410499+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb4c000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb4c000 session 0x5613eb003860
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 18186240 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:45.410626+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 18186240 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 738018 data_alloc: 218103808 data_used: 86016
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ecba0000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.961263657s of 11.402002335s, submitted: 155
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:46.410752+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 97 ms_handle_reset con 0x5613ecba0000 session 0x5613eac6cf00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 17940480 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:47.410957+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 17940480 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba69000/0x0/0x4ffc00000, data 0x110bcb6/0x11b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:48.411118+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 17924096 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:49.411245+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba69000/0x0/0x4ffc00000, data 0x110bcb6/0x11b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 17924096 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba69000/0x0/0x4ffc00000, data 0x110bcb6/0x11b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:50.411380+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 97 ms_handle_reset con 0x5613edb41400 session 0x5613ec2ad4a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb4c000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 17874944 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746531 data_alloc: 218103808 data_used: 102400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:51.411526+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 97 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 17825792 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb4c000 session 0x5613eb994b40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb62000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb63c00 session 0x5613ea5e7680
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb62000 session 0x5613ecd1a000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb62400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb62400 session 0x5613eb983c20
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:52.411676+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 17620992 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 99 ms_handle_reset con 0x5613edb41400 session 0x5613ebd4d680
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:53.411844+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 17596416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613edb63c00 session 0x5613eada3680
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:54.412002+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d8400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ec2d8400 session 0x5613eada25a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d9400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ec2d9400 session 0x5613eada2f00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 17530880 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee36e000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ee36e000 session 0x5613eada30e0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee36e000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:55.412163+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ee36e000 session 0x5613eaf89860
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fba5e000/0x0/0x4ffc00000, data 0x110fe54/0x11bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 17506304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d8400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 757056 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:56.412396+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 17506304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.367811203s of 10.547314644s, submitted: 63
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 101 ms_handle_reset con 0x5613ec2d8400 session 0x5613eb139860
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:57.412563+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 17489920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:58.412720+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 101 ms_handle_reset con 0x5613ece8c800 session 0x5613ec2e1e00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 101 ms_handle_reset con 0x5613ed315c00 session 0x5613ebd4de00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 17539072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d9400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:59.412850+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fba5e000/0x0/0x4ffc00000, data 0x111146e/0x11bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 17539072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 102 ms_handle_reset con 0x5613ec2d9400 session 0x5613eb994d20
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:00.413028+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 17448960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763148 data_alloc: 218103808 data_used: 106496
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:01.413252+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 17440768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:02.413408+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 17440768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:03.413575+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 17424384 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fba7b000/0x0/0x4ffc00000, data 0x10eff54/0x11a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:04.413750+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 17424384 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 103 ms_handle_reset con 0x5613ee379000 session 0x5613eada74a0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 103 ms_handle_reset con 0x5613edb63400 session 0x5613eb983a40
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d8400
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 103 ms_handle_reset con 0x5613ec2d8400 session 0x5613eaf16000
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:05.413913+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 764913 data_alloc: 218103808 data_used: 114688
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:06.414043+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fba7a000/0x0/0x4ffc00000, data 0x10f13e5/0x11a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 104 ms_handle_reset con 0x5613ece8c800 session 0x5613eac0f860
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.541474342s of 10.000211716s, submitted: 137
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:07.414163+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 105 ms_handle_reset con 0x5613ed315c00 session 0x5613eac0e780
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10f13c2/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:08.414332+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:09.414461+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:10.414607+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fba79000/0x0/0x4ffc00000, data 0x10f298e/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765885 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:11.414793+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:12.414933+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:13.415060+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:14.415221+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fba79000/0x0/0x4ffc00000, data 0x10f298e/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:15.415369+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765885 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:16.415565+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:17.415697+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:18.415809+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.993864059s of 12.120968819s, submitted: 33
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:19.421968+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:20.422134+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:21.422304+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:22.422459+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:23.422572+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:24.422748+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:25.422897+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:26.423236+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:27.426599+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:28.428873+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:29.429936+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:30.431015+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:31.431339+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:32.432429+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:33.433059+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:34.433848+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:35.434420+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:36.434899+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:37.435322+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:38.435447+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:39.435676+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:40.435828+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:41.436025+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:42.436165+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:43.436646+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:44.437022+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:45.437274+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:46.437446+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:47.437622+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:48.438026+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:49.438322+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:50.438661+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:51.438864+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:52.439131+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:53.439320+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:54.439631+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:55.439920+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:56.440121+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:57.440260+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:58.440413+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:59.440537+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:00.440639+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:01.440826+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:02.440933+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:03.441083+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:04.441255+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:05.441435+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:06.441560+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:07.441733+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:08.441860+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:09.442010+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:10.442150+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:11.442363+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:12.442506+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-mon[75654]: from='client.14792 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:31 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2521465429' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 23:58:31 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3747309463' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 23:58:31 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3028224869' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 23:58:31 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/434467421' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:13.442659+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:14.442819+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:15.443005+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:16.443117+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:17.443216+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:18.443325+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:19.443453+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:20.443578+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:21.443717+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:22.443846+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:23.444103+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:24.444360+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:25.444524+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:26.444741+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:27.445005+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:28.445261+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:29.445486+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:30.445753+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:31.446410+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:32.451277+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:33.453751+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:34.457311+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:35.458674+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:36.459016+0000)
Nov 25 23:58:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766270033' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:37.459263+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:38.459484+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:39.459699+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:40.459814+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:41.459947+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:42.460261+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:43.460473+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:44.460655+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:45.460879+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:46.461046+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:47.461276+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:48.461556+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:49.461945+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:50.462124+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:51.462270+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:52.462412+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:53.462581+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:54.462676+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:55.462780+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:56.462908+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:31 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:31 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:57.463016+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:58.463138+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 17195008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: do_command 'config diff' '{prefix=config diff}'
Nov 25 23:58:31 compute-0 ceph-osd[90055]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 23:58:31 compute-0 ceph-osd[90055]: do_command 'config show' '{prefix=config show}'
Nov 25 23:58:31 compute-0 ceph-osd[90055]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 23:58:31 compute-0 ceph-osd[90055]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 23:58:31 compute-0 ceph-osd[90055]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 23:58:31 compute-0 ceph-osd[90055]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 23:58:31 compute-0 ceph-osd[90055]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:59.463303+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 16842752 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: tick
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 25 23:58:31 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:00.463409+0000)
Nov 25 23:58:31 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 16777216 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:31 compute-0 ceph-osd[90055]: do_command 'log dump' '{prefix=log dump}'
Nov 25 23:58:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 25 23:58:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647276152' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 23:58:31 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 23:58:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 25 23:58:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1384574903' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 23:58:31 compute-0 rsyslogd[1001]: imjournal from <np0005535838:ceph-osd>: begin to drop messages due to rate-limiting
Nov 25 23:58:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 25 23:58:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807121259' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 23:58:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 25 23:58:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127167271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 23:58:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 25 23:58:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1519987033' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 23:58:32 compute-0 ceph-mon[75654]: pgmap v894: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2766270033' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 23:58:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3647276152' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 23:58:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1384574903' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 23:58:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1807121259' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 23:58:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1127167271' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 23:58:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1519987033' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 23:58:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 25 23:58:32 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015904010' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 23:58:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 25 23:58:32 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241021847' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 23:58:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v895: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 25 23:58:32 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043055087' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 23:58:32 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14826 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:33 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14828 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:33 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14830 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:33 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1015904010' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 23:58:33 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1241021847' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 23:58:33 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2043055087' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 23:58:33 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14834 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:33 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14832 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:33 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14836 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:34 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14840 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:34 compute-0 ceph-mon[75654]: pgmap v895: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:34 compute-0 ceph-mon[75654]: from='client.14826 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:34 compute-0 ceph-mon[75654]: from='client.14828 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:34 compute-0 ceph-mon[75654]: from='client.14830 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:34 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 25 23:58:34 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1930788129' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 23:58:34 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14844 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v896: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:34 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 25 23:58:34 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2919947581' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 23:58:34 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14848 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000202 1 0.000067
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000020
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000019 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000149 1 0.000060
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000028 1 0.000047
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000174 1 0.000097
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000098 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000023 1 0.000052
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000154 1 0.000098
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000109 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000094
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000136 1 0.000087
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000032
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000175 1 0.000077
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000081 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000033
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000276 1 0.000082
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000027
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000125 1 0.000069
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.001201 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000053
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000162 1 0.000078
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000137 1 0.000149
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000013 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000119 1 0.000073
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000064 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000041
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000020 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000097 1 0.000072
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000025 1 0.000036
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000130 1 0.000064
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000042 1 0.000061
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000125 1 0.000099
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000021
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000085 1 0.000050
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000068 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000034
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000262 1 0.000116
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000082 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000026
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000096 1 0.000057
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000016
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000093 1 0.000053
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000063 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000038 1 0.000057
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000136 1 0.000069
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000039 1 0.000062
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000163 1 0.000092
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000012 1 0.000022
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000162 1 0.000053
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000072 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000033
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000043 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000135 1 0.000113
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000045 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000028
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000177 1 0.000125
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000054 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000029
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000110 1 0.000095
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000043 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000040
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000108 1 0.000072
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000040 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000027
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000127 1 0.000155
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000049 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000030
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000146 1 0.000101
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000041 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000026
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000218 1 0.000072
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000060 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000029
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000146 1 0.000077
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000048 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000032
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000136 1 0.000074
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000095 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000042
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000128 1 0.000081
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000083 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000040
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000167 1 0.000078
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000180 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000047
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000207 1 0.000080
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000027
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000134 1 0.000079
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000060 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000044 1 0.000067
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000151 1 0.000116
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000058 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000030
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000188 1 0.000121
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000030 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000024
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000073 1 0.000044
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000019 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=0 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000010
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000039
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000013
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000080 1 0.000044
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=0/0 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024736 2 0.000088
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023808 2 0.000054
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024669 2 0.000042
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023442 2 0.000044
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000026 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=0 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000012
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000080 1 0.000122
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016432 2 0.000089
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.016044 2 0.000064
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013593 2 0.000056
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011385 2 0.000047
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.015469 2 0.000039
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.024243 2 0.000067
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.009324 2 0.000059
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.023776 2 0.000075
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008940 2 0.000066
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008436 2 0.000062
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.008008 2 0.000077
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022959 2 0.000067
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.022364 2 0.000079
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012827 2 0.000092
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007647 2 0.000079
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.005902 2 0.000107
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.010018 2 0.000077
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004896 2 0.000072
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004092 2 0.000078
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003455 2 0.000179
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003168 2 0.000033
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003109 2 0.000023
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002906 2 0.000040
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017502 2 0.000072
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007574 2 0.000058
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000059 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021947 2 0.000092
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.021756 2 0.000070
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.017257 2 0.000079
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014966 2 0.000045
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.014722 2 0.000054
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.013453 2 0.000161
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.012216 2 0.000096
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.011727 2 0.000076
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.007891 2 0.000097
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.019774 2 0.000095
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.006569 2 0.000069
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003178 2 0.000063
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:38.520516+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 59301888 unmapped: 1540096 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 42 handle_osd_map epochs [42,43], i have 42, src has [1,43]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 42 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973286 2 0.000060
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995902 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972048 2 0.000079
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987201 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972010 2 0.000022
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980081 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972225 2 0.000033
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989656 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972022 2 0.000021
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.978842 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973577 2 0.000025
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.996721 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972110 2 0.000075
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972500 2 0.000073
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.994451 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992116 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975724 2 0.000042
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000678 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972244 2 0.000021
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.975535 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972552 2 0.000025
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987408 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972526 2 0.000036
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972547 2 0.000028
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984500 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975900 2 0.000028
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986244 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000706 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974142 2 0.000025
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998122 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.972515 2 0.000046
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985059 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975984 2 0.000075
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999641 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973157 2 0.000091
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976174 2 0.000035
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995528 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974417 2 0.000096
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000229 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=37/40 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973700 2 0.000063
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.976753 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998901 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973744 2 0.000066
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.977044 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974658 2 0.000035
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990257 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.973850 2 0.000120
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991568 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975004 2 0.000041
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991225 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975551 2 0.000085
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.992179 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974997 2 0.000057
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979322 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.974657 2 0.000075
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982450 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975570 2 0.000021
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985038 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975300 2 0.000076
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983175 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975533 2 0.000079
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983863 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975662 2 0.000024
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984285 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975437 2 0.000062
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975685 2 0.000031
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980588 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975380 2 0.000067
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.978682 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988840 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975999 2 0.000054
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985160 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976302 2 0.000028
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987890 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976422 2 0.000065
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990346 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975876 2 0.001034
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986106 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.975753 2 0.000064
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979459 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=39/40 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.976005 2 0.000075
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982189 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=35/36 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 43 handle_osd_map epochs [43,43], i have 43, src has [1,43]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021082 7 0.000141
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021650 7 0.000124
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.022065 7 0.000108
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.020450 7 0.000982
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021096 7 0.000127
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021474 7 0.000182
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005917 4 0.000078
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000028 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016864 4 0.000088
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016927 4 0.000092
Nov 25 23:58:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016704 4 0.000077
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016741 4 0.000151
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:35 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1726622350' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016976 4 0.000045
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016988 4 0.000149
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=37/37 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017252 4 0.000252
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017320 4 0.000087
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017286 4 0.000054
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017344 4 0.000084
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017400 4 0.000297
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017471 4 0.000281
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018104 4 0.000186
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017442 4 0.000054
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017583 4 0.000190
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017594 4 0.000210
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017412 4 0.000217
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017300 4 0.000238
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017369 4 0.000189
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=37/20 lis/c=42/37 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017370 4 0.000291
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017200 4 0.000049
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017402 4 0.000275
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017183 4 0.000077
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016494 4 0.000115
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016505 4 0.000043
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016373 4 0.000051
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016441 4 0.000075
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016280 4 0.000038
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016104 4 0.000124
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016078 4 0.000048
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=35/35 les/c/f=36/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016357 4 0.000112
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016024 4 0.000034
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.016067 4 0.000190
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015925 4 0.000081
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015923 4 0.000065
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015909 4 0.000097
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036066 7 0.000106
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015835 4 0.000062
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017407 4 0.000033
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.015905 4 0.000186
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/39 les/c/f=43/40/0 sis=42) [0] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000139 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.017594 4 0.001067
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=35/16 lis/c=42/35 les/c/f=43/36/0 sis=42) [0] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000213 1 0.000041
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041012 7 0.000100
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041027 7 0.000077
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041976 7 0.000210
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041595 7 0.000291
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042813 7 0.000105
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042550 7 0.000184
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000240 1 0.000058
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040225 7 0.001211
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.043116 7 0.000286
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000272 1 0.000028
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000364 1 0.000020
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000369 1 0.000025
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000426 1 0.000021
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000566 1 0.000077
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000721 1 0.000021
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000857 1 0.000018
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.048562 7 0.000129
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.048309 7 0.000254
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000043 1 0.000048
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050838 7 0.000069
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050725 7 0.000116
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.048310 7 0.000371
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050087 7 0.000087
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050279 7 0.000063
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050792 7 0.000116
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000249 1 0.000075
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.049669 7 0.000107
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.050049 7 0.000054
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.049771 7 0.000074
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000214 1 0.000023
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.049441 7 0.000131
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.049239 7 0.000076
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000373 1 0.000022
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000427 1 0.000034
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000510 1 0.000017
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000532 1 0.000023
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000574 1 0.000019
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000657 1 0.000020
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000730 1 0.000019
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001000 1 0.000019
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001046 1 0.000018
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.001090 1 0.000016
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1c( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.020062 1 0.000048
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1c( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.020309 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1c( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.056436 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.017875 1 0.000036
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.018150 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.11( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.059252 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.040379 2 0.000041
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.040412 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000080 1 0.000071
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.026655 1 0.000083
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.026961 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.13( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.068029 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.13] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.033840 1 0.000041
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.034226 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.076266 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038641 1 0.000022
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039027 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.e( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.080684 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.e] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045762 1 0.000028
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046210 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.088826 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1a] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053050 1 0.000033
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053644 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.a( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.096546 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.a] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060274 1 0.000038
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.061029 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.18( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.101878 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.18] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067481 1 0.000062
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.068376 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.1b( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [2] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.111522 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1b] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.10( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.068417 1 0.000031
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.10( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.068482 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.10( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.117126 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.075646 1 0.000046
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.075922 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.12( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.124332 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.082927 1 0.000082
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.083167 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.8( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.134047 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.090194 1 0.000042
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.090603 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.9( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.141390 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097568 1 0.000042
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.098029 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.14( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.146659 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.104892 1 0.000022
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.105429 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.9( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.155550 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.112154 1 0.000026
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112711 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.5( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.163026 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.5] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.119562 1 0.000130
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.120266 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.7( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.171122 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.126826 1 0.000024
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.127506 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.1( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.177238 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.134123 1 0.000191
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.134874 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.4( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.184957 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.143758 1 0.000027
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.144784 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.2( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.194606 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.d( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.148639 1 0.000049
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.d( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.149718 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.d( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.199220 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.d] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.155908 1 0.000023
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.157022 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[4.f( empty lb MIN local-lis/les=37/38 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=-1 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.206291 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.286227 2 0.000196
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.286357 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.347929 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.330250 2 0.000018
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.330312 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000166 1 0.000151
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 43 heartbeat osd_stat(store_statfs(0x4fe154000/0x0/0x4ffc00000, data 0x2dbdf/0x78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:39.520654+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 7 sent 5 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:09.360449+0000 osd.0 (osd.0) 6 : cluster [DBG] 4.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:09.507823+0000 osd.0 (osd.0) 7 : cluster [DBG] 4.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.475990 2 0.000042
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.476043 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000094 1 0.000079
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.149787 2 0.000301
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.150014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.5( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.502473 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.5] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.617550 2 0.000116
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.617643 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000123 1 0.000147
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.142312 2 0.000347
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.142482 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.3( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.639910 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.3] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1114112 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.727334023s of 10.054935455s, submitted: 493
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.690684 2 0.000029
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.690719 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000103 1 0.000077
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.140817 2 0.000255
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.141020 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.779835 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.142485 2 0.000193
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.142655 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.7( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.855107 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.7] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.911721 2 0.000071
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.911780 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000124 1 0.000158
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 DELETING pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.028649 2 0.000220
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.028862 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 43 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=42) [1] r=-1 lpr=42 pi=[39,42)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.962232 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 43 handle_osd_map epochs [43,44], i have 43, src has [1,44]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 7) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:09.360449+0000 osd.0 (osd.0) 6 : cluster [DBG] 4.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:09.507823+0000 osd.0 (osd.0) 7 : cluster [DBG] 4.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.489499 7 0.000060
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.489250 7 0.000048
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.497834 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.497860 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.507709 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.507194 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.507752 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510518074s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.130073547s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510458946s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130073547s@ mbc={}] exit Reset 0.000118 1 0.000161
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510458946s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130073547s@ mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510458946s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130073547s@ mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510458946s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130073547s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510458946s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130073547s@ mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.510458946s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.130073547s@ mbc={}] enter Started/Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.489787 7 0.000160
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.498190 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.508623 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.508666 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509833336s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129592896s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509752274s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129592896s@ mbc={}] exit Reset 0.000142 1 0.000184
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509752274s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129592896s@ mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509752274s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129592896s@ mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509752274s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129592896s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 5.494101 7 0.000086
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509752274s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129592896s@ mbc={}] exit Start 0.000025 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 5.498793 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 6.508542 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509752274s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129592896s@ mbc={}] enter Started/Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.508619 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505555153s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.125503540s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505515099s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.125503540s@ mbc={}] exit Reset 0.000072 1 0.000104
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 6.507242 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505515099s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.125503540s@ mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505515099s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.125503540s@ mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505515099s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.125503540s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505515099s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.125503540s@ mbc={}] exit Start 0.000018 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.505515099s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.125503540s@ mbc={}] enter Started/Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509609222s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 89.129646301s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509572029s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129646301s@ mbc={}] exit Reset 0.000065 1 0.000650
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509572029s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129646301s@ mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509572029s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129646301s@ mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509572029s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129646301s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509572029s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129646301s@ mbc={}] exit Start 0.000021 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 44 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44 pruub=10.509572029s) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 89.129646301s@ mbc={}] enter Started/Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 44 handle_osd_map epochs [44,44], i have 44, src has [1,44]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:40.520921+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 9 sent 7 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:10.225259+0000 osd.0 (osd.0) 8 : cluster [DBG] 4.6 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:10.239220+0000 osd.0 (osd.0) 9 : cluster [DBG] 4.6 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 1138688 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 9) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:10.225259+0000 osd.0 (osd.0) 8 : cluster [DBG] 4.6 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:10.239220+0000 osd.0 (osd.0) 9 : cluster [DBG] 4.6 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 44 handle_osd_map epochs [45,45], i have 44, src has [1,45]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018545 6 0.000135
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018344 6 0.000226
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027736 7 0.000107
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000112 1 0.000088
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035557 7 0.000234
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000144 1 0.000089
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.011596 1 0.000085
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.011778 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.a( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.039593 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.025329 3 0.000079
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.025367 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000120 1 0.000078
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.a] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.067589 1 0.000064
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.067800 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.2( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.103436 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.2] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.091754 3 0.000075
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.091843 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000118 1 0.000140
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.073829 2 0.000209
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.074045 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.6( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.118059 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 DELETING pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.022147 2 0.000232
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.022326 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 45 pg[6.e( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=44) [1] r=-1 lpr=44 pi=[39,44)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.132609 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.e] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:41.521122+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 1130496 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 336264 data_alloc: 218103808 data_used: 8192
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 45 handle_osd_map epochs [46,46], i have 45, src has [1,46]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=0 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000082 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=0 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000044
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000019 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000240 1 0.000078
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=0 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=0 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000035
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000169 1 0.000064
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=0 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000117 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=0 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000055
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000145 1 0.000046
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=0 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000056 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=0 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000038
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000118 1 0.000066
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002002 2 0.000120
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002282 2 0.000372
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001963 2 0.000086
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.002313 2 0.000070
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 46 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:42.521301+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 59842560 unmapped: 999424 heap: 60841984 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 46 handle_osd_map epochs [46,47], i have 46, src has [1,47]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 46 handle_osd_map epochs [47,47], i have 47, src has [1,47]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012724 2 0.000090
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.013902 2 0.000098
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.016228 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.015310 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.013225 2 0.000063
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.015800 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.013010 2 0.000135
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 1.015201 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.004562 4 0.000276
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.004662 4 0.000207
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000173 1 0.000089
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.008133 5 0.000335
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.008483 5 0.000408
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.010107 2 0.000051
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.010343 2 0.000064
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-mon[75654]: from='client.14834 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:35 compute-0 ceph-mon[75654]: from='client.14832 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:35 compute-0 ceph-mon[75654]: from='client.14836 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:35 compute-0 ceph-mon[75654]: from='client.14840 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:35 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1930788129' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 23:58:35 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2919947581' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 lc 32'11 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.058587 1 0.000037
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.7( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.065306 1 0.000077
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000061 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 lc 32'1 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.208131 1 0.000314
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.273627 1 0.000127
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000026 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:43.521496+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.124630 1 0.000156
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000037 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 47 pg[6.3( v 32'39 (0'0,32'39] local-lis/les=46/47 n=2 ec=39/22 lis/c=46/42 les/c/f=47/43/0 sis=46) [0] r=0 lpr=46 pi=[42,46)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 1925120 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:44.521697+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 1884160 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 47 heartbeat osd_stat(store_statfs(0x4fe149000/0x0/0x4ffc00000, data 0x33025/0x82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:45.521909+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:15.252955+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:15.267154+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 1884160 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 11) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:15.252955+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:15.267154+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:46.522225+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 1884160 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 358595 data_alloc: 218103808 data_used: 8192
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:47.522387+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 1875968 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:48.522558+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:18.304083+0000 osd.0 (osd.0) 12 : cluster [DBG] 4.c scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:18.318372+0000 osd.0 (osd.0) 13 : cluster [DBG] 4.c scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60022784 unmapped: 1867776 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 13) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:18.304083+0000 osd.0 (osd.0) 12 : cluster [DBG] 4.c scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:18.318372+0000 osd.0 (osd.0) 13 : cluster [DBG] 4.c scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 47 handle_osd_map epochs [48,48], i have 47, src has [1,48]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.718921 20 0.000107
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.727443 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 15.737631 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 15.737675 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281056404s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 97.130294800s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281001091s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.130294800s@ mbc={}] exit Reset 0.000089 1 0.000133
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281001091s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.130294800s@ mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281001091s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.130294800s@ mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281001091s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.130294800s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281001091s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.130294800s@ mbc={}] exit Start 0.000032 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.281001091s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.130294800s@ mbc={}] enter Started/Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 14.719843 20 0.000146
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 14.728548 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 15.737666 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 15.737719 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279815674s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 97.129615784s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279746056s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.129615784s@ mbc={}] exit Reset 0.000128 1 0.000195
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279746056s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.129615784s@ mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279746056s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.129615784s@ mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279746056s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.129615784s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279746056s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.129615784s@ mbc={}] exit Start 0.000016 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 48 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48 pruub=9.279746056s) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.129615784s@ mbc={}] enter Started/Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:49.522844+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 1851392 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 48 heartbeat osd_stat(store_statfs(0x4fe14c000/0x0/0x4ffc00000, data 0x33025/0x82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:50.523074+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 1 last_log 14 sent 13 num 1 unsent 1 sending 1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:20.316122+0000 osd.0 (osd.0) 14 : cluster [DBG] 4.15 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60055552 unmapped: 1835008 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.901387215s of 11.124527931s, submitted: 70
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 48 handle_osd_map epochs [48,49], i have 48, src has [1,49]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.596076 7 0.000141
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.595477 7 0.000391
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 49 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 49 handle_osd_map epochs [49,49], i have 49, src has [1,49]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=0 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=0 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000020
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000104 1 0.000073
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=0 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000036 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=0 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000014
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000039
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000657 2 0.000045
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000743 2 0.000033
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.049924 2 0.000032
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.049961 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000191 1 0.000095
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 DELETING pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.121599 2 0.000210
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.121871 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.c( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.767981 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 49 heartbeat osd_stat(store_statfs(0x4fe148000/0x0/0x4ffc00000, data 0x34628/0x85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.303201 2 0.000056
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.303248 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000148 1 0.000153
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 DELETING pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.033304 2 0.000325
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.033543 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 49 pg[6.4( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=2 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=48) [1] r=-1 lpr=48 pi=[39,48)/1 luod=0'0 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 1.932382 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:51.523309+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 15 sent 14 num 2 unsent 1 sending 1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:20.918056+0000 osd.0 (osd.0) 15 : cluster [DBG] 4.15 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60129280 unmapped: 1761280 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 360530 data_alloc: 218103808 data_used: 24576
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 14) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:20.316122+0000 osd.0 (osd.0) 14 : cluster [DBG] 4.15 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 15) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:20.918056+0000 osd.0 (osd.0) 15 : cluster [DBG] 4.15 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 49 handle_osd_map epochs [49,50], i have 49, src has [1,50]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 49 handle_osd_map epochs [49,50], i have 50, src has [1,50]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002793 2 0.000078
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.003673 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004000 2 0.000062
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.004835 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.005568 3 0.000255
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000126 1 0.000090
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 lc 32'7 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.008965 4 0.000253
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.069039 3 0.000137
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.065269 2 0.000116
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000040 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 lc 32'6 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.125328 1 0.000217
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000056 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 50 pg[6.5( v 32'39 (0'0,32'39] local-lis/les=49/50 n=2 ec=39/22 lis/c=49/42 les/c/f=50/43/0 sis=49) [0] r=0 lpr=49 pi=[42,49)/1 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 50 heartbeat osd_stat(store_statfs(0x4fe142000/0x0/0x4ffc00000, data 0x36b27/0x8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:52.523536+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 1753088 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 50 heartbeat osd_stat(store_statfs(0x4fe142000/0x0/0x4ffc00000, data 0x36b27/0x8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 50 handle_osd_map epochs [50,51], i have 50, src has [1,51]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:53.523743+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60194816 unmapped: 1695744 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:54.523882+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60203008 unmapped: 1687552 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:55.524054+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:25.266703+0000 osd.0 (osd.0) 16 : cluster [DBG] 4.16 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:25.280862+0000 osd.0 (osd.0) 17 : cluster [DBG] 4.16 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 51 heartbeat osd_stat(store_statfs(0x4fe13e000/0x0/0x4ffc00000, data 0x3812a/0x8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 51 handle_osd_map epochs [52,52], i have 51, src has [1,52]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 1613824 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 17) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:25.266703+0000 osd.0 (osd.0) 16 : cluster [DBG] 4.16 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:25.280862+0000 osd.0 (osd.0) 17 : cluster [DBG] 4.16 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:56.524292+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:26.231324+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.17 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:26.245397+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.17 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60309504 unmapped: 1581056 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 377104 data_alloc: 218103808 data_used: 24576
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 19) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:26.231324+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.17 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:26.245397+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.17 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:57.524583+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60284928 unmapped: 1605632 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:58.524748+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:28.246117+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.19 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:28.260214+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.19 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60284928 unmapped: 1605632 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 21) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:28.246117+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.19 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:28.260214+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.19 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 52 handle_osd_map epochs [53,54], i have 52, src has [1,54]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=0 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000082 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=0 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000023 1 0.000047
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000226 1 0.000072
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 53 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 24.418652 37 0.000135
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 53 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 24.427509 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 53 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 25.438114 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 53 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 25.438160 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 53 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=39) [0] r=0 lpr=39 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 53 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581254959s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 active pruub 113.130058289s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581200600s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.130058289s@ mbc={}] exit Reset 0.000090 2 0.000138
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581200600s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.130058289s@ mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581200600s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.130058289s@ mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581200600s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.130058289s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581200600s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.130058289s@ mbc={}] exit Start 0.000014 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53 pruub=15.581200600s) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.130058289s@ mbc={}] enter Started/Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002005 2 0.000068
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 54 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 54 handle_osd_map epochs [53,54], i have 54, src has [1,54]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:33:59.524951+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 60350464 unmapped: 1540096 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009147 2 0.000125
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011515 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=42/43 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=39/22 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=39/22 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005320 3 0.000219
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=39/22 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=39/22 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.9( v 32'39 (0'0,32'39] local-lis/les=54/55 n=1 ec=39/22 lis/c=54/42 les/c/f=55/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.021981 7 0.000167
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000174 1 0.000167
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.008746 1 0.000089
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.009019 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.8( v 32'39 (0'0,32'39] lb MIN local-lis/les=39/41 n=1 ec=39/22 lis/c=39/39 les/c/f=41/41/0 sis=53) [2] r=-1 lpr=53 pi=[39,53)/1 crt=32'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.031095 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.8] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:00.525246+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:30.167525+0000 osd.0 (osd.0) 22 : cluster [DBG] 4.1d scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:30.181655+0000 osd.0 (osd.0) 23 : cluster [DBG] 4.1d scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a(unlocked)] enter Initial
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=0 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000115 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=0 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000049 1 0.000096
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000357 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000289 1 0.000668
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001042 2 0.000303
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000033 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 55 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61423616 unmapped: 466944 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 55 heartbeat osd_stat(store_statfs(0x4fe133000/0x0/0x4ffc00000, data 0x3d946/0x99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 23) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:30.167525+0000 osd.0 (osd.0) 22 : cluster [DBG] 4.1d scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:30.181655+0000 osd.0 (osd.0) 23 : cluster [DBG] 4.1d scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.958192825s of 10.151649475s, submitted: 49
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.529355 2 0.000235
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.530908 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 lc 0'0 (0'0,32'39] local-lis/les=44/45 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=55/56 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=55/56 n=1 ec=39/22 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=55/56 n=1 ec=39/22 lis/c=55/44 les/c/f=56/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004069 3 0.000238
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=55/56 n=1 ec=39/22 lis/c=55/44 les/c/f=56/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=55/56 n=1 ec=39/22 lis/c=55/44 les/c/f=56/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000047 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 56 pg[6.a( v 32'39 (0'0,32'39] local-lis/les=55/56 n=1 ec=39/22 lis/c=55/44 les/c/f=56/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=32'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 56 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:01.525521+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:31.139873+0000 osd.0 (osd.0) 24 : cluster [DBG] 4.1e scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:31.154047+0000 osd.0 (osd.0) 25 : cluster [DBG] 4.1e scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61431808 unmapped: 458752 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 396862 data_alloc: 218103808 data_used: 32768
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 25) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:31.139873+0000 osd.0 (osd.0) 24 : cluster [DBG] 4.1e scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:31.154047+0000 osd.0 (osd.0) 25 : cluster [DBG] 4.1e scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:02.525730+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 368640 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:03.525921+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61530112 unmapped: 360448 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 56 heartbeat osd_stat(store_statfs(0x4fe12f000/0x0/0x4ffc00000, data 0x3edb6/0x9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=46) [0] r=0 lpr=46 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 20.742012 28 0.000118
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=46) [0] r=0 lpr=46 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 20.757026 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=46) [0] r=0 lpr=46 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 21.772854 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=46) [0] r=0 lpr=46 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 21.772890 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=46) [0] r=0 lpr=46 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247831345s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 32'39 active pruub 113.683471680s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247759819s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 113.683471680s@ mbc={}] exit Reset 0.000136 1 0.000200
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247759819s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 113.683471680s@ mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247759819s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 113.683471680s@ mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247759819s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 113.683471680s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247759819s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 113.683471680s@ mbc={}] exit Start 0.000015 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 57 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=11.247759819s) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 113.683471680s@ mbc={}] enter Started/Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 57 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.176846 7 0.000248
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 58 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.022403 2 0.000155
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.022488 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000214 1 0.000117
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 DELETING pi=[46,57)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.022479 2 0.000328
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.022823 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 58 pg[6.b( v 32'39 (0'0,32'39] lb MIN local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=-1 lpr=57 pi=[46,57)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 0.222313 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:04.526079+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:34.122414+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:34.136554+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 311296 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 27) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:34.122414+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:34.136554+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:05.526343+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 311296 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:06.526525+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=49) [0] r=0 lpr=49 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 14.699705 24 0.000260
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=49) [0] r=0 lpr=49 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 14.774720 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=49) [0] r=0 lpr=49 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 15.778414 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=49) [0] r=0 lpr=49 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 15.778441 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=49) [0] r=0 lpr=49 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230822563s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 32'39 active pruub 114.456108093s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230740547s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 114.456108093s@ mbc={}] exit Reset 0.000123 1 0.000185
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230740547s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 114.456108093s@ mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230740547s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 114.456108093s@ mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230740547s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 114.456108093s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230740547s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 114.456108093s@ mbc={}] exit Start 0.000008 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 59 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=9.230740547s) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 114.456108093s@ mbc={}] enter Started/Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 59 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 59 heartbeat osd_stat(store_statfs(0x4fe12a000/0x0/0x4ffc00000, data 0x419c4/0xa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61587456 unmapped: 303104 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402242 data_alloc: 218103808 data_used: 32768
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.426864 6 0.000129
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.102255 3 0.000061
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.102330 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000125 1 0.000133
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 DELETING pi=[49,59)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.019158 2 0.000215
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.019365 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 60 pg[6.d( v 32'39 (0'0,32'39] lb MIN local-lis/les=49/50 n=1 ec=39/22 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=-1 lpr=59 pi=[49,59)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 0.548618 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.d] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:07.526676+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:37.102454+0000 osd.0 (osd.0) 28 : cluster [DBG] 5.1e scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:37.116550+0000 osd.0 (osd.0) 29 : cluster [DBG] 5.1e scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 262144 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 29) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:37.102454+0000 osd.0 (osd.0) 28 : cluster [DBG] 5.1e scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:37.116550+0000 osd.0 (osd.0) 29 : cluster [DBG] 5.1e scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:08.526893+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 245760 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:09.527049+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 212992 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 61 heartbeat osd_stat(store_statfs(0x4fe121000/0x0/0x4ffc00000, data 0x459f4/0xaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=46) [0] r=0 lpr=46 crt=32'39 mlcod 32'39 active+clean] exit Started/Primary/Active/Clean 26.875573 44 0.000199
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=46) [0] r=0 lpr=46 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary/Active 27.157555 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=46) [0] r=0 lpr=46 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started/Primary 28.172783 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=46) [0] r=0 lpr=46 crt=32'39 mlcod 32'39 active mbc={255={}}] exit Started 28.172850 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=46) [0] r=0 lpr=46 crt=32'39 mlcod 32'39 active mbc={255={}}] enter Reset
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850687027s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 32'39 active pruub 121.687095642s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850586891s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 121.687095642s@ mbc={}] exit Reset 0.000151 1 0.000259
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850586891s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 121.687095642s@ mbc={}] enter Started
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850586891s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 121.687095642s@ mbc={}] enter Start
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850586891s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 121.687095642s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850586891s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 121.687095642s@ mbc={}] exit Start 0.000015 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 62 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=12.850586891s) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY pruub 121.687095642s@ mbc={}] enter Started/Stray
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: unregistering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:10.527187+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:40.041326+0000 osd.0 (osd.0) 30 : cluster [DBG] 2.13 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:40.055423+0000 osd.0 (osd.0) 31 : cluster [DBG] 2.13 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 188416 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 31) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:40.041326+0000 osd.0 (osd.0) 30 : cluster [DBG] 2.13 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:40.055423+0000 osd.0 (osd.0) 31 : cluster [DBG] 2.13 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 62 handle_osd_map epochs [63,63], i have 62, src has [1,63]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.972566605s of 10.122732162s, submitted: 34
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.863977 6 0.000134
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 crt=32'39 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.131510 3 0.000063
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.131565 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReseved
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReseved 0.000140 1 0.000112
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 DELETING pi=[46,62)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.026357 2 0.000390
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.026672 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 pg_epoch: 63 pg[6.f( v 32'39 (0'0,32'39] lb MIN local-lis/les=46/47 n=1 ec=39/22 lis/c=46/46 les/c/f=47/47/0 sis=62) [2] r=-1 lpr=62 pi=[46,62)/1 luod=0'0 crt=32'39 mlcod 0'0 active mbc={}] exit Started 1.022340 0 0.000000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:11.527424+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:41.046889+0000 osd.0 (osd.0) 32 : cluster [DBG] 2.16 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:41.061019+0000 osd.0 (osd.0) 33 : cluster [DBG] 2.16 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 163840 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 407392 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 33) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:41.046889+0000 osd.0 (osd.0) 32 : cluster [DBG] 2.16 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:41.061019+0000 osd.0 (osd.0) 33 : cluster [DBG] 2.16 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:12.527644+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 147456 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:13.527868+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:43.062873+0000 osd.0 (osd.0) 34 : cluster [DBG] 5.14 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:43.076949+0000 osd.0 (osd.0) 35 : cluster [DBG] 5.14 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 147456 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 35) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:43.062873+0000 osd.0 (osd.0) 34 : cluster [DBG] 5.14 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:43.076949+0000 osd.0 (osd.0) 35 : cluster [DBG] 5.14 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:14.528112+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:44.094314+0000 osd.0 (osd.0) 36 : cluster [DBG] 3.17 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:44.108543+0000 osd.0 (osd.0) 37 : cluster [DBG] 3.17 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 131072 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 37) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:44.094314+0000 osd.0 (osd.0) 36 : cluster [DBG] 3.17 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:44.108543+0000 osd.0 (osd.0) 37 : cluster [DBG] 3.17 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:15.528711+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 122880 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:16.528842+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 122880 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 408808 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:17.528986+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 122880 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:18.529214+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:48.060713+0000 osd.0 (osd.0) 38 : cluster [DBG] 7.13 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:48.074761+0000 osd.0 (osd.0) 39 : cluster [DBG] 7.13 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 114688 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 39) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:48.060713+0000 osd.0 (osd.0) 38 : cluster [DBG] 7.13 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:48.074761+0000 osd.0 (osd.0) 39 : cluster [DBG] 7.13 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:19.529492+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:49.037817+0000 osd.0 (osd.0) 40 : cluster [DBG] 3.f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:49.051939+0000 osd.0 (osd.0) 41 : cluster [DBG] 3.f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 106496 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 41) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:49.037817+0000 osd.0 (osd.0) 40 : cluster [DBG] 3.f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:49.051939+0000 osd.0 (osd.0) 41 : cluster [DBG] 3.f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:20.529733+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:50.041832+0000 osd.0 (osd.0) 42 : cluster [DBG] 3.c scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:50.055882+0000 osd.0 (osd.0) 43 : cluster [DBG] 3.c scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 98304 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 43) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:50.041832+0000 osd.0 (osd.0) 42 : cluster [DBG] 3.c scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:50.055882+0000 osd.0 (osd.0) 43 : cluster [DBG] 3.c scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:21.529948+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 98304 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 412250 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:22.530083+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 90112 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:23.530278+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 90112 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.868285179s of 12.929282188s, submitted: 15
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:24.530437+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:54.114355+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.11 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:54.128437+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.11 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 81920 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 45) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:54.114355+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.11 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:54.128437+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.11 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:25.530620+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:55.152844+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.12 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:55.166945+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.12 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 81920 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 47) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:55.152844+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.12 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:55.166945+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.12 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:26.530789+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:56.134334+0000 osd.0 (osd.0) 48 : cluster [DBG] 7.f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:56.148404+0000 osd.0 (osd.0) 49 : cluster [DBG] 7.f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 73728 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 415693 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 49) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:56.134334+0000 osd.0 (osd.0) 48 : cluster [DBG] 7.f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:56.148404+0000 osd.0 (osd.0) 49 : cluster [DBG] 7.f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:27.531003+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 32768 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:28.531122+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 32768 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:29.531267+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 24576 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:30.531395+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 24576 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:31.531510+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 24576 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 415693 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:32.531619+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 16384 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:33.531721+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 16384 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.039699554s of 10.055684090s, submitted: 6
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:34.531852+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:04.170064+0000 osd.0 (osd.0) 50 : cluster [DBG] 5.15 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:04.184188+0000 osd.0 (osd.0) 51 : cluster [DBG] 5.15 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 8192 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 51) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:04.170064+0000 osd.0 (osd.0) 50 : cluster [DBG] 5.15 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:04.184188+0000 osd.0 (osd.0) 51 : cluster [DBG] 5.15 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:35.532098+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:05.175605+0000 osd.0 (osd.0) 52 : cluster [DBG] 3.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:05.189836+0000 osd.0 (osd.0) 53 : cluster [DBG] 3.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 8192 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 53) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:05.175605+0000 osd.0 (osd.0) 52 : cluster [DBG] 3.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:05.189836+0000 osd.0 (osd.0) 53 : cluster [DBG] 3.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:36.532423+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 8192 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 417988 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:37.532617+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:07.221119+0000 osd.0 (osd.0) 54 : cluster [DBG] 3.6 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:07.235238+0000 osd.0 (osd.0) 55 : cluster [DBG] 3.6 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 1032192 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 55) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:07.221119+0000 osd.0 (osd.0) 54 : cluster [DBG] 3.6 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:07.235238+0000 osd.0 (osd.0) 55 : cluster [DBG] 3.6 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:38.532802+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 1032192 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:39.532973+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 1032192 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:40.533199+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 1024000 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:41.533358+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 1024000 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 419135 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:42.533531+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:12.069763+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:12.083875+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 999424 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 57) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:12.069763+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:12.083875+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:43.533715+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:13.029311+0000 osd.0 (osd.0) 58 : cluster [DBG] 2.18 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:13.043334+0000 osd.0 (osd.0) 59 : cluster [DBG] 2.18 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 999424 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 59) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:13.029311+0000 osd.0 (osd.0) 58 : cluster [DBG] 2.18 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:13.043334+0000 osd.0 (osd.0) 59 : cluster [DBG] 2.18 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:44.533908+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:14.009829+0000 osd.0 (osd.0) 60 : cluster [DBG] 7.9 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:14.023897+0000 osd.0 (osd.0) 61 : cluster [DBG] 7.9 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 991232 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 61) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:14.009829+0000 osd.0 (osd.0) 60 : cluster [DBG] 7.9 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:14.023897+0000 osd.0 (osd.0) 61 : cluster [DBG] 7.9 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:45.534247+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 991232 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:46.534355+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 991232 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422577 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:47.534549+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 983040 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:48.534728+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 983040 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:49.534893+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 974848 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:50.535069+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.703536987s of 16.757547379s, submitted: 12
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 974848 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:51.535325+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:20.927526+0000 osd.0 (osd.0) 62 : cluster [DBG] 7.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:20.941728+0000 osd.0 (osd.0) 63 : cluster [DBG] 7.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 958464 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 424871 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 63) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:20.927526+0000 osd.0 (osd.0) 62 : cluster [DBG] 7.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:20.941728+0000 osd.0 (osd.0) 63 : cluster [DBG] 7.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:52.535591+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:21.952688+0000 osd.0 (osd.0) 64 : cluster [DBG] 7.4 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:21.966794+0000 osd.0 (osd.0) 65 : cluster [DBG] 7.4 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 958464 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 65) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:21.952688+0000 osd.0 (osd.0) 64 : cluster [DBG] 7.4 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:21.966794+0000 osd.0 (osd.0) 65 : cluster [DBG] 7.4 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:53.535952+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 958464 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:54.536108+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 950272 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:55.536301+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 950272 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:56.536542+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 942080 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 426019 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:57.536724+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:26.965564+0000 osd.0 (osd.0) 66 : cluster [DBG] 3.1b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:26.979720+0000 osd.0 (osd.0) 67 : cluster [DBG] 3.1b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 942080 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 67) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:26.965564+0000 osd.0 (osd.0) 66 : cluster [DBG] 3.1b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:26.979720+0000 osd.0 (osd.0) 67 : cluster [DBG] 3.1b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:58.536952+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 942080 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:59.537377+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62005248 unmapped: 933888 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:00.537700+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:29.956036+0000 osd.0 (osd.0) 68 : cluster [DBG] 7.6 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:29.970085+0000 osd.0 (osd.0) 69 : cluster [DBG] 7.6 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 69) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:29.956036+0000 osd.0 (osd.0) 68 : cluster [DBG] 7.6 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:29.970085+0000 osd.0 (osd.0) 69 : cluster [DBG] 7.6 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62005248 unmapped: 933888 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:01.538086+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 925696 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427166 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:02.538506+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 925696 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:03.538660+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 925696 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:04.538985+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62021632 unmapped: 917504 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:05.539305+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.938228607s of 14.966608047s, submitted: 8
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 909312 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:06.539515+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:35.894235+0000 osd.0 (osd.0) 70 : cluster [DBG] 3.9 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:35.908357+0000 osd.0 (osd.0) 71 : cluster [DBG] 3.9 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 71) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:35.894235+0000 osd.0 (osd.0) 70 : cluster [DBG] 3.9 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:35.908357+0000 osd.0 (osd.0) 71 : cluster [DBG] 3.9 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 1949696 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429461 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:07.539768+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:36.846778+0000 osd.0 (osd.0) 72 : cluster [DBG] 7.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:36.860678+0000 osd.0 (osd.0) 73 : cluster [DBG] 7.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 73) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:36.846778+0000 osd.0 (osd.0) 72 : cluster [DBG] 7.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:36.860678+0000 osd.0 (osd.0) 73 : cluster [DBG] 7.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 1949696 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:08.540017+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 1941504 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:09.540257+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 1933312 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:10.540617+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 1933312 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:11.540776+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 1925120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429461 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:12.540968+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 1925120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:13.541115+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 1925120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:14.541351+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 1908736 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:15.541736+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:44.752147+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.18 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:44.765712+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.18 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 1908736 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 75) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:44.752147+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.18 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:44.765712+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.18 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:16.541954+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 1900544 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430609 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:17.542095+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 1900544 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:18.542271+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 1892352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:19.542512+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 1892352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:20.542645+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 1892352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:21.542801+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 1884160 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430609 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:22.543261+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 1875968 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:23.543597+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 1859584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:24.543760+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 1859584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:25.544438+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 1859584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:26.544594+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.821636200s of 20.845338821s, submitted: 6
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 1851392 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 431757 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:27.544821+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:56.739510+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.1b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:56.753605+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.1b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 1843200 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 77) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:56.739510+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.1b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:56.753605+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.1b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:28.545068+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:57.771527+0000 osd.0 (osd.0) 78 : cluster [DBG] 3.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:57.785608+0000 osd.0 (osd.0) 79 : cluster [DBG] 3.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 1826816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 79) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:57.771527+0000 osd.0 (osd.0) 78 : cluster [DBG] 3.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:57.785608+0000 osd.0 (osd.0) 79 : cluster [DBG] 3.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:29.545306+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 1826816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:30.545469+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 1826816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:31.545617+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 1818624 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432905 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:32.545783+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 1810432 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:33.546045+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:02.754567+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.15 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:02.768605+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.15 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 1810432 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 81) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:02.754567+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.15 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:02.768605+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.15 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:34.546509+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:03.745709+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.a scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:03.759940+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.a scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 1794048 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 83) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:03.745709+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.a scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:03.759940+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.a scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:35.546723+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:04.741719+0000 osd.0 (osd.0) 84 : cluster [DBG] 5.7 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:04.756200+0000 osd.0 (osd.0) 85 : cluster [DBG] 5.7 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 1794048 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 85) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:04.741719+0000 osd.0 (osd.0) 84 : cluster [DBG] 5.7 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:04.756200+0000 osd.0 (osd.0) 85 : cluster [DBG] 5.7 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:36.546957+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 1785856 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436347 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:37.547113+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 1785856 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:38.547273+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.878055573s of 11.916460991s, submitted: 10
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 1777664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:39.547477+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:08.656138+0000 osd.0 (osd.0) 86 : cluster [DBG] 2.2 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:08.670264+0000 osd.0 (osd.0) 87 : cluster [DBG] 2.2 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 1777664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 87) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:08.656138+0000 osd.0 (osd.0) 86 : cluster [DBG] 2.2 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:08.670264+0000 osd.0 (osd.0) 87 : cluster [DBG] 2.2 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:40.547747+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 1777664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:41.548009+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 1761280 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 437494 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:42.548229+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 1761280 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:43.548479+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 1753088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:44.548652+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:13.555094+0000 osd.0 (osd.0) 88 : cluster [DBG] 5.4 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:13.569223+0000 osd.0 (osd.0) 89 : cluster [DBG] 5.4 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 1753088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 89) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:13.555094+0000 osd.0 (osd.0) 88 : cluster [DBG] 5.4 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:13.569223+0000 osd.0 (osd.0) 89 : cluster [DBG] 5.4 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:45.548940+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 1753088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:46.549151+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 1744896 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 438641 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:47.549339+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 1744896 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:48.549497+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:18.494319+0000 osd.0 (osd.0) 90 : cluster [DBG] 2.19 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:18.508549+0000 osd.0 (osd.0) 91 : cluster [DBG] 2.19 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 1712128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 91) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:18.494319+0000 osd.0 (osd.0) 90 : cluster [DBG] 2.19 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:18.508549+0000 osd.0 (osd.0) 91 : cluster [DBG] 2.19 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.816717148s of 10.837341309s, submitted: 6
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:49.549674+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:19.493594+0000 osd.0 (osd.0) 92 : cluster [DBG] 5.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:19.507597+0000 osd.0 (osd.0) 93 : cluster [DBG] 5.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 1712128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 93) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:19.493594+0000 osd.0 (osd.0) 92 : cluster [DBG] 5.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:19.507597+0000 osd.0 (osd.0) 93 : cluster [DBG] 5.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:50.549893+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:20.510540+0000 osd.0 (osd.0) 94 : cluster [DBG] 5.2 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:20.524631+0000 osd.0 (osd.0) 95 : cluster [DBG] 5.2 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 1712128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 95) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:20.510540+0000 osd.0 (osd.0) 94 : cluster [DBG] 5.2 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:20.524631+0000 osd.0 (osd.0) 95 : cluster [DBG] 5.2 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:51.550105+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:21.509040+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.8 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:21.523090+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.8 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 1695744 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 443230 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 97) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:21.509040+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.8 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:21.523090+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.8 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:52.550388+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 1687552 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:53.550586+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 1679360 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:54.550771+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 1679360 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:55.551001+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1671168 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:56.551207+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1671168 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 443230 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:57.551375+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1671168 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:58.551600+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1662976 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:59.551832+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1662976 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:00.552074+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 1654784 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:01.554534+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.046909332s of 12.069302559s, submitted: 6
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1646592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 444377 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:02.555412+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 4 last_log 101 sent 97 num 4 unsent 4 sending 4
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:31.562756+0000 osd.0 (osd.0) 98 : cluster [DBG] 2.b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:31.577003+0000 osd.0 (osd.0) 99 : cluster [DBG] 2.b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:32.517161+0000 osd.0 (osd.0) 100 : cluster [DBG] 2.1d scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:32.531390+0000 osd.0 (osd.0) 101 : cluster [DBG] 2.1d scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 1622016 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 101) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:31.562756+0000 osd.0 (osd.0) 98 : cluster [DBG] 2.b scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:31.577003+0000 osd.0 (osd.0) 99 : cluster [DBG] 2.b scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:32.517161+0000 osd.0 (osd.0) 100 : cluster [DBG] 2.1d scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:32.531390+0000 osd.0 (osd.0) 101 : cluster [DBG] 2.1d scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:03.555817+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:33.535826+0000 osd.0 (osd.0) 102 : cluster [DBG] 5.5 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:33.549663+0000 osd.0 (osd.0) 103 : cluster [DBG] 5.5 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 1605632 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 103) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:33.535826+0000 osd.0 (osd.0) 102 : cluster [DBG] 5.5 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:33.549663+0000 osd.0 (osd.0) 103 : cluster [DBG] 5.5 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:04.557111+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 1605632 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:05.558156+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 1605632 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:06.559003+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446672 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 1581056 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:07.559820+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:37.513508+0000 osd.0 (osd.0) 104 : cluster [DBG] 2.1c scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:37.527617+0000 osd.0 (osd.0) 105 : cluster [DBG] 2.1c scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 1581056 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 105) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:37.513508+0000 osd.0 (osd.0) 104 : cluster [DBG] 2.1c scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:37.527617+0000 osd.0 (osd.0) 105 : cluster [DBG] 2.1c scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:08.560063+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1572864 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:09.560250+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 1564672 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:10.561103+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 1556480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:11.561295+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 447820 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 1556480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.907520294s of 10.934170723s, submitted: 8
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:12.562947+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:42.496951+0000 osd.0 (osd.0) 106 : cluster [DBG] 2.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:42.511098+0000 osd.0 (osd.0) 107 : cluster [DBG] 2.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 1556480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.7 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.7 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:13.564848+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 4 last_log 109 sent 107 num 4 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:43.458943+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.7 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:43.476594+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.7 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 107) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:42.496951+0000 osd.0 (osd.0) 106 : cluster [DBG] 2.1f scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:42.511098+0000 osd.0 (osd.0) 107 : cluster [DBG] 2.1f scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 1548288 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:14.566303+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 109) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:43.458943+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.7 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:43.476594+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.7 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 1548288 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:15.568866+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:45.393115+0000 osd.0 (osd.0) 110 : cluster [DBG] 6.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:45.414149+0000 osd.0 (osd.0) 111 : cluster [DBG] 6.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 111) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:45.393115+0000 osd.0 (osd.0) 110 : cluster [DBG] 6.3 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:45.414149+0000 osd.0 (osd.0) 111 : cluster [DBG] 6.3 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1540096 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:16.570120+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 451262 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1540096 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:17.570875+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1540096 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:18.572972+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 1531904 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:19.573408+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 1531904 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:20.573991+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 1515520 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:21.574282+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:51.288689+0000 osd.0 (osd.0) 112 : cluster [DBG] 6.5 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:51.309913+0000 osd.0 (osd.0) 113 : cluster [DBG] 6.5 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 113) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:51.288689+0000 osd.0 (osd.0) 112 : cluster [DBG] 6.5 scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:51.309913+0000 osd.0 (osd.0) 113 : cluster [DBG] 6.5 scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452409 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1507328 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:22.575125+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1507328 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:23.575364+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.736191750s of 11.767599106s, submitted: 8
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1507328 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:24.575532+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:54.264779+0000 osd.0 (osd.0) 114 : cluster [DBG] 6.9 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:54.278847+0000 osd.0 (osd.0) 115 : cluster [DBG] 6.9 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 115) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:54.264779+0000 osd.0 (osd.0) 114 : cluster [DBG] 6.9 deep-scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:54.278847+0000 osd.0 (osd.0) 115 : cluster [DBG] 6.9 deep-scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 1499136 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:25.575842+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:55.276921+0000 osd.0 (osd.0) 116 : cluster [DBG] 6.a scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:55.291016+0000 osd.0 (osd.0) 117 : cluster [DBG] 6.a scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 117) v1
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:55.276921+0000 osd.0 (osd.0) 116 : cluster [DBG] 6.a scrub starts
Nov 25 23:58:35 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:55.291016+0000 osd.0 (osd.0) 117 : cluster [DBG] 6.a scrub ok
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 1499136 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:26.576275+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62496768 unmapped: 1490944 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:27.576389+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62496768 unmapped: 1490944 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:28.576768+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1482752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:29.576954+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1482752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:30.577240+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1482752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:31.578322+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 1474560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:32.578588+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 1474560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:33.578790+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 1474560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:34.579035+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62521344 unmapped: 1466368 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:35.579584+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62521344 unmapped: 1466368 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:36.579827+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 1458176 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:37.580034+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 1449984 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:38.580325+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 1449984 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:39.580612+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1441792 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:40.580852+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1441792 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:41.581067+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1441792 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:42.581302+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1433600 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:43.581558+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1433600 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:44.582694+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1425408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:45.582885+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1425408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:46.583062+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1425408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:47.583309+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 1417216 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:48.583493+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 1417216 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:49.583674+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 1409024 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:50.583903+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 1400832 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:51.584227+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 1392640 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:52.584471+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 1392640 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:53.584728+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1384448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:54.585040+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1384448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:55.585369+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1384448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:56.585723+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 1376256 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:57.586015+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 1376256 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:58.586342+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 1376256 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:59.586570+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 1359872 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:00.586776+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 1359872 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:01.587028+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1351680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:02.587253+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1351680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:03.587505+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1351680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:04.587669+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 1343488 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:05.588680+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62652416 unmapped: 1335296 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:06.589396+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 1327104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:07.590025+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:08.590248+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 1327104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:09.590416+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 1327104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:10.591227+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1318912 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:11.591379+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1318912 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:12.591782+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1310720 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:13.591950+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1310720 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:14.592114+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1310720 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:15.592281+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 1302528 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:16.592449+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 1302528 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:17.592645+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 1294336 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:18.592833+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 1294336 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:19.593016+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 1286144 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:20.593162+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1277952 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:21.593336+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1277952 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:22.593519+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1277952 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:23.640930+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1277952 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:24.641070+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 1269760 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:25.641250+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 1269760 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:26.641483+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 1261568 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:27.641637+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 1253376 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:28.641984+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 1245184 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:29.642488+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 1245184 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:30.642747+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1236992 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:31.643007+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1236992 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:32.643276+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 1228800 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:33.643493+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 1228800 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:34.643717+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 1228800 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:35.644010+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 1220608 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:36.644246+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 1220608 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:37.644699+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 1220608 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:38.644861+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 1212416 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:39.644998+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 1212416 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:40.645158+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1204224 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:41.645351+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1196032 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:42.645522+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1196032 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:43.645742+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1187840 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:44.645944+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1187840 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:45.646148+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1179648 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:46.646291+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1179648 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:47.646412+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1171456 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:48.646539+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1171456 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:49.646658+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1171456 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:50.646772+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1163264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:51.662539+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1163264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:52.662697+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1163264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:53.662864+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 1155072 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:54.663006+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 1155072 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:55.663211+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 1146880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:56.663380+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 1146880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:57.663664+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 1146880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:58.663926+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1138688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:59.664153+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1138688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:00.664388+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1138688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:01.664525+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 1130496 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:02.664765+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 1130496 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:03.664944+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 1122304 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:04.665113+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 1122304 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:05.665374+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 1122304 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:06.665541+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 1114112 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:07.665761+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 1114112 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:08.665935+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 1105920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:09.666094+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 1105920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:10.666243+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 1105920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:11.666438+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 1097728 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:12.666595+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 1097728 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:13.666772+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 1089536 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:14.666998+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 1089536 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:15.667234+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 1081344 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:16.667431+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 1081344 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:17.667593+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 1081344 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:18.667773+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1073152 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:19.667961+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1073152 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:20.668099+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1064960 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:21.668306+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1064960 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:22.668451+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1064960 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:23.668591+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1056768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:24.668755+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1056768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:25.668961+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1056768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:26.669232+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1048576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:27.669378+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1048576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:28.669532+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1048576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:29.669664+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1040384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:30.669812+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1040384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:31.670112+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1040384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:32.670278+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 1032192 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:33.670451+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 1032192 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:34.670637+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 1024000 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:35.670852+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 1024000 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:36.671003+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 1015808 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:37.671190+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 1015808 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:38.671349+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 1015808 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:39.671538+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 1007616 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:40.671734+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 1007616 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:41.672082+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 1007616 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:42.672249+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 999424 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:43.672421+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 999424 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:44.672535+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 991232 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:45.672743+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 991232 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:46.672832+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 983040 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:47.672979+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 983040 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:48.673148+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 983040 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:49.673324+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 974848 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:50.673455+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 974848 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:51.673606+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 974848 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:52.673764+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 966656 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:53.673888+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 966656 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:54.674026+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 966656 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:55.674238+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 958464 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:56.674389+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 958464 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:57.674548+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 958464 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:58.674703+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 950272 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:59.674832+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 950272 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:00.675013+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 942080 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:01.675166+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 942080 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:02.675335+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 933888 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:03.675497+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 933888 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:04.675631+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 933888 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:05.675809+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 925696 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:06.675995+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 925696 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:07.676134+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 917504 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:08.676265+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 917504 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:09.676421+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 917504 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:10.676589+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 909312 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:11.676787+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 909312 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:12.676923+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 901120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:13.677110+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 901120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:14.677259+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 901120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:15.677467+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 901120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:16.677597+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 892928 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:17.677778+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 892928 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:18.678020+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 892928 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:19.678223+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 884736 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:20.678364+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 884736 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:21.678569+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 876544 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:22.678765+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 876544 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:23.678964+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 876544 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:24.679121+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 868352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:25.679404+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 868352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:26.679598+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 868352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:27.679762+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 860160 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:28.679900+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 860160 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:29.681018+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 860160 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:30.681222+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 851968 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:31.681393+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 851968 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:32.681550+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 843776 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:33.681702+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 843776 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:34.681929+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 835584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:35.682138+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 835584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:36.682320+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 835584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:37.682513+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 827392 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:38.682674+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 827392 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:39.682859+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 827392 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:40.683010+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 819200 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:41.683141+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 819200 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:42.683292+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 811008 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:43.683475+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 811008 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:44.683588+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 811008 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:45.684017+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 802816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:46.684113+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 802816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:47.684284+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 802816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:48.684497+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 794624 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:49.684622+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 794624 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:50.684798+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 794624 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:51.684999+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 786432 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:52.685217+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 786432 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:53.685384+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 786432 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:54.685554+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 778240 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:55.685731+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 778240 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:56.685883+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 770048 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:57.686027+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 770048 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:58.686227+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 770048 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:59.686392+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 761856 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:00.686564+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 761856 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:01.686718+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 753664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:02.686910+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 753664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:03.687056+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 753664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:04.687284+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 745472 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:05.687468+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 745472 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:06.687598+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 737280 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:07.687758+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 737280 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:08.687892+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 729088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:09.688072+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 729088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:10.688243+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 729088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:11.688389+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 729088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:12.688530+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 720896 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:13.688737+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 720896 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:14.688954+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 712704 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:15.689212+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 712704 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:16.689375+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 704512 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:17.689509+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 696320 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:18.690097+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 696320 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:19.690298+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 696320 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:20.691907+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 688128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:21.692095+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 688128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:22.692253+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 688128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:23.692829+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 679936 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:24.693103+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 679936 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:25.693439+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 671744 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:26.693650+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 671744 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:27.693822+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 663552 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:28.694016+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 663552 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:29.694337+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 663552 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:30.694712+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 655360 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:31.694960+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 655360 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:32.695163+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 655360 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:33.695562+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 647168 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:34.695841+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 647168 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:35.696033+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 638976 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:36.696215+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 638976 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:37.696359+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 638976 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:38.696709+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 630784 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:39.696890+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 630784 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:40.697061+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 622592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:41.697269+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 622592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:42.697450+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 622592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:43.697624+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 622592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:44.697772+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 622592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:45.697928+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 614400 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:46.698033+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 614400 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:47.698149+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 606208 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:48.698324+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 606208 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:49.698590+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 606208 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:50.698926+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 598016 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:51.699211+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 598016 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:52.700082+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 589824 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:53.700666+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 589824 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:54.701021+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 589824 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:55.701394+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 581632 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:56.701521+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 581632 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:57.701802+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 573440 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:58.702085+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 573440 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:59.702292+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 573440 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:00.702463+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 565248 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:01.702632+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 565248 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:02.702842+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 557056 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:03.703078+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 557056 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:04.703256+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 557056 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:05.703506+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 548864 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:06.703692+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 548864 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:07.703823+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 548864 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:08.704017+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 540672 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:09.704262+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 540672 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:10.704425+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 532480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:11.704592+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 532480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:12.704803+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 532480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:13.704989+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 524288 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:14.705203+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 524288 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:15.705367+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 516096 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:16.705552+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 491520 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:17.705750+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 491520 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:18.706012+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 483328 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:19.706160+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 483328 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:20.706345+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 475136 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:21.706542+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 475136 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:22.706679+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 475136 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:23.706863+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 466944 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:24.707071+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 466944 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:25.707289+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 458752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:26.707476+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 458752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:27.707664+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:28.708223+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 458752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:29.708386+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 450560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:30.708530+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 450560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:31.708670+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 450560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:32.708817+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 442368 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:33.708988+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 442368 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:34.709246+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 442368 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:35.709416+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 434176 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:36.709540+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 434176 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:37.709724+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 409600 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:38.709890+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 409600 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:39.710159+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 409600 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:40.710417+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 401408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:41.710623+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 401408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:42.710753+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 401408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:43.711078+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 393216 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:44.711420+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 393216 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:45.711653+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 385024 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:46.711810+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 385024 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:47.712145+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 385024 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:48.712395+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 385024 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:49.712590+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 376832 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:50.712813+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 376832 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:51.712989+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 368640 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:52.713229+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 368640 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:53.713365+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 360448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:54.713504+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 360448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:55.713633+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 360448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:56.713830+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 352256 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:57.714025+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 335872 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:58.714217+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 335872 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:59.714337+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 327680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:00.714579+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 327680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:01.714693+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 327680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:02.714956+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 319488 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:03.715221+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 319488 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:04.715381+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 311296 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:05.715914+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 311296 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:06.716083+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 311296 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:07.716308+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 303104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:08.717253+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 303104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:09.717502+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 303104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:10.717723+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 294912 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:11.717899+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 294912 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:12.718093+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 294912 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:13.718340+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 286720 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:14.718612+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 286720 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:15.718985+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 278528 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:16.719128+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 278528 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:17.719282+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 278528 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:18.719643+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 270336 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:19.719868+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 270336 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:20.720148+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 262144 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 16.39 MB, 0.03 MB/s
                                           Interval WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:21.720374+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 196608 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:22.720540+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 196608 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:23.720702+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 188416 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:24.720847+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 188416 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:25.721413+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 180224 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:26.721558+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 180224 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:27.721680+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 180224 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:28.721834+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 172032 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:29.721954+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 172032 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:30.722080+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 172032 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:31.722199+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 163840 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:32.722425+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 163840 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:33.722575+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 163840 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:34.722723+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 155648 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:35.722889+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 155648 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:36.723075+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 155648 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:37.723278+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 147456 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:38.723422+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 147456 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:39.723556+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 139264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:40.723689+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 139264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:41.723891+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 139264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:42.724058+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 131072 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:43.724274+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 131072 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:44.724431+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 122880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:45.724658+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 122880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:46.724821+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 122880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:47.725020+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 122880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:48.725209+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 122880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 114688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:50.370379+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 114688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:51.370521+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 114688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:52.370689+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 106496 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:53.370866+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 106496 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:54.371025+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 106496 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:55.371214+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 98304 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:56.371385+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 98304 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:57.371554+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 90112 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:58.371703+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 90112 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:59.371869+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 90112 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:00.372052+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 81920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:01.372239+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 81920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:02.372418+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 81920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:03.372619+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 73728 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:04.372786+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 73728 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:05.372947+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 73728 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:06.373132+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 65536 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:07.373302+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 65536 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:08.373429+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 65536 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:09.373579+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 57344 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:10.373813+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 57344 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:11.374014+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 49152 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:12.374235+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 49152 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:13.374457+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 49152 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:14.374737+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 40960 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:15.375029+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 40960 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:16.375261+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 32768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:17.375442+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 32768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:18.375626+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 32768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:19.375844+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 24576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:20.376134+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 24576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:21.376333+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 24576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:22.376490+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 16384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:23.376789+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 16384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:24.377048+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 16384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:25.377248+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 8192 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:26.377481+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 8192 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:27.377613+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 0 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:28.377854+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 0 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:29.378067+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 0 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:30.378242+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1040384 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:31.378400+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1040384 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:32.378532+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 1032192 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:33.378779+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 1032192 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:34.379054+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 1032192 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:35.379314+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 1024000 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:36.379667+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 1024000 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:37.379929+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 1015808 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:38.380116+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 1015808 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:39.380274+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 1015808 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:40.380579+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 1007616 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:41.380870+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 1007616 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:42.381898+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 1007616 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:43.382165+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 999424 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:44.382527+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 999424 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:45.382811+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 991232 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:46.383067+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 991232 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:47.383357+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 991232 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:48.383608+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 983040 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:49.383794+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 983040 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:50.383983+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 983040 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:51.384141+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 983040 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:52.384847+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 974848 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:53.385004+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 974848 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:54.385147+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 966656 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:55.385257+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:56.394089+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 966656 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:57.394252+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 958464 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:58.394413+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 958464 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:59.394586+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 950272 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:00.394752+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 950272 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:01.394908+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 950272 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:02.395041+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 942080 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:03.395179+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 942080 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:04.395323+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 942080 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:05.395515+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 933888 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:06.395659+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 933888 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:07.395802+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 933888 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:08.395909+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 925696 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:09.396082+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 925696 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:10.396208+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 917504 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:11.396340+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 917504 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:12.396467+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 917504 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:13.396591+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 909312 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:14.396749+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 909312 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:15.396923+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 901120 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:16.397167+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 901120 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:17.397364+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 901120 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:18.397490+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 892928 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:19.397659+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 892928 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:20.397779+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 892928 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:21.397906+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 884736 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:22.398084+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:23.398263+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:24.398395+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:25.398569+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:26.398821+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:27.398939+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:28.399102+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:29.399300+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:30.399501+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:31.399743+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:32.399929+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:33.400088+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:34.400299+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:35.400509+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:36.400746+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:37.400886+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:38.401096+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:39.401276+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:40.401418+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:41.401560+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:42.401669+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:43.401817+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:44.401995+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:45.402145+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:46.402397+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:47.402507+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:48.402680+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:49.402845+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:50.403004+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:51.403139+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:52.403224+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:53.403343+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:54.403460+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:55.403660+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:56.403847+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:57.404021+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:58.404244+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:59.404384+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:00.404507+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:01.404627+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:02.404774+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:03.404900+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:04.405025+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:05.405218+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:06.405398+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:07.405528+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:08.405689+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:09.405853+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:10.406008+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:11.406152+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:12.406294+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:13.406493+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:14.406677+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:15.406820+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:16.406963+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:17.407157+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:18.407379+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:19.407562+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:20.407734+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:21.407864+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:22.408029+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:23.408241+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:24.408404+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:25.408602+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:26.408815+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:27.409033+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:28.409167+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:29.409325+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:30.409542+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:31.409744+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:32.409861+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:33.409978+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:34.410107+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:35.410232+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:36.410433+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:37.410551+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:38.410726+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:39.410889+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:40.411077+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:41.411274+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:42.411430+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:43.411704+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:44.412062+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:45.412264+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:46.412515+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:47.412775+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:48.412958+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:49.413391+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:50.413579+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:51.413813+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:52.414101+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:53.414362+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:54.414591+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:55.414768+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:56.414936+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:57.415068+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:58.415230+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:59.415342+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:00.415451+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:01.415597+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:02.415746+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:03.415896+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:04.416043+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:05.416212+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:06.416368+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:07.416517+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:08.416702+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:09.416861+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:10.417087+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:11.417227+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:12.417361+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:13.417509+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:14.417681+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:15.418026+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:16.418238+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:17.418390+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:18.418542+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:19.418729+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:20.418883+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:21.419046+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:22.419190+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:23.419386+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:24.419538+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:25.419684+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:26.419851+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:27.419965+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:28.420100+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:29.420319+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:30.420480+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:31.420721+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:32.421057+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:33.421189+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:34.421359+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:35.421570+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:36.421795+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:37.421971+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:38.422111+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:39.422288+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:40.422461+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:41.422611+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:42.422751+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:43.422898+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:44.423116+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:45.423233+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:46.423763+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:47.424786+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:48.424913+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:49.425571+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:50.425979+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:51.426145+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:52.426315+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:53.427092+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:54.427237+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:55.427469+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:56.427661+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:57.427922+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:58.428167+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:59.428311+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:00.428529+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:01.428659+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:02.428792+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:03.428953+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:04.429115+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:05.429306+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:06.429518+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:07.429679+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:08.429889+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:09.430077+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:10.430262+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:11.430412+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:12.430551+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:13.430695+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:14.430839+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:15.430986+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:16.431216+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:17.431374+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:18.431483+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:19.431639+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:20.431782+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:21.431960+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:22.432116+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:23.432282+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:24.432460+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:25.432630+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:26.432836+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: mgrc ms_handle_reset ms_handle_reset con 0x55a4ecca1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 25 23:58:35 compute-0 ceph-osd[89044]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: get_auth_request con 0x55a4ecca0800 auth_method 0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: mgrc handle_mgr_configure stats_period=5
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:27.432995+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:28.433130+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:29.433289+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:30.433407+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 ms_handle_reset con 0x55a4ecca1800 session 0x55a4ec84d4a0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4800
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:31.433538+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:32.433708+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:33.433873+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:34.434001+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:35.434118+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:36.434272+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:37.434425+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:38.434585+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:39.434737+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:40.434882+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:41.435052+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:42.435237+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:43.435375+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:44.435524+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:45.435702+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:46.435918+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:47.436066+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:48.436197+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:49.436346+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:50.436481+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:51.436649+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:52.436802+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:53.436941+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:54.437160+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:55.437350+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:56.437551+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:57.437691+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:58.437819+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:59.437939+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:00.438075+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:01.438375+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:02.438615+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:03.438812+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:04.439049+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:05.439207+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:06.439345+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:07.439504+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:08.439636+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:09.439848+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:10.440053+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:11.440326+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:12.440560+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:13.440731+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:14.440927+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:15.441127+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:16.441346+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:17.441564+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:18.441759+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:19.441983+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:20.442201+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:21.442391+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:22.442557+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:23.442732+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:24.443117+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:25.443279+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:26.443741+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:27.443864+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:28.444027+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:29.444205+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:30.444349+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:31.444578+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:32.444875+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:33.445101+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:34.445311+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:35.445533+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:36.445810+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:37.446091+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:38.446282+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:39.446418+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:40.446554+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:41.446721+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:42.446915+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:43.447146+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:44.447321+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:45.447469+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:46.447663+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:47.447791+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:48.447957+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:49.448298+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:50.448558+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:51.448749+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:52.448997+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:53.449260+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:54.449495+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:55.449737+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:56.449952+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:57.450253+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:58.450532+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:59.450801+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:00.450974+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:01.451355+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:02.451545+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:03.452114+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:04.452325+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:05.452609+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:06.452786+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:07.452947+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:08.453270+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:09.453440+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:10.453593+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:11.453731+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:12.453906+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:13.454048+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:14.454242+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:15.454678+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:16.454835+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:17.454983+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:18.455105+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:19.455280+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:20.455474+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:21.455646+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:22.455866+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:23.456076+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:24.456245+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:25.456395+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:26.456549+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:27.456737+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:28.456893+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:29.457072+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:30.457219+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:31.457368+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:32.457577+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:33.457799+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:34.457980+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:35.458114+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:36.458274+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:37.458410+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:38.458548+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:39.458714+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:40.458863+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:41.459094+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:42.459287+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:43.459529+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:44.459717+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:45.459893+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:46.460059+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:47.460251+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:48.460407+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:49.460559+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:50.460807+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:51.460989+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:52.461195+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:53.461334+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:54.461511+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:55.461612+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:56.461783+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:57.461969+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:58.462137+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:59.462239+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:00.462380+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:01.462518+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:02.463214+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:03.464007+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:04.464163+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:05.464403+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:06.464773+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:07.464888+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:08.465014+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:09.465228+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:10.465337+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:11.465452+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:12.465553+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:13.465651+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:14.465771+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:15.465888+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:16.466045+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:17.466180+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:18.466326+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:19.466462+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:20.466709+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:21.466867+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:22.466980+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:23.467138+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:24.467384+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:25.467545+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:26.467713+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:27.467867+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:28.468019+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:29.468149+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:30.468272+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:31.468419+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:32.468560+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:33.468693+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:34.468804+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:35.468932+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:36.469080+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:37.469200+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:38.469387+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:39.469514+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:40.469666+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:41.469797+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:42.469976+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:43.470231+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:44.470377+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:45.470578+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:46.470785+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:47.470966+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:48.471103+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:49.471271+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:50.471379+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:51.471496+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:52.471596+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:53.471726+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:54.471897+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:55.472058+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:56.472304+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:57.472452+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:58.472639+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:59.472783+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:00.472919+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:01.473049+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:02.473190+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:03.473373+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:04.473682+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:05.473952+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:06.474235+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:07.474383+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:08.474995+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:09.475508+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:10.475955+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:11.476293+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:12.476545+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:13.476837+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:14.477106+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:15.477387+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:16.477581+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:17.477740+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:18.477901+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:19.478120+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:20.478344+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:21.478538+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:22.478746+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:23.478907+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:24.479053+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:25.479242+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:26.479428+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:27.479652+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:28.479792+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:29.479970+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:30.480155+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:31.480378+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:32.480561+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:33.480660+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:34.480806+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:35.480970+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:36.481140+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:37.481297+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:38.481480+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:39.481648+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:40.481828+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:41.481996+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:42.482156+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:43.482342+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:44.482423+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:45.482511+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:46.482643+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:47.482779+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:48.482908+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:49.483108+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:50.483292+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:51.483466+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:52.483569+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:53.483718+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:54.483890+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:55.484045+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:56.484226+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:57.484357+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:58.484497+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:59.484691+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:00.484903+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:01.485014+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:02.485119+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:03.485267+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:04.485416+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:05.485562+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:06.485772+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:07.486075+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:08.486369+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:09.487254+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:10.487594+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:11.487985+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:12.488291+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:13.488493+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:14.488801+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:15.489231+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:16.489530+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:17.489869+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:18.490088+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:19.490346+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:20.490537+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:21.490934+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:22.491113+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:23.491296+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:24.491507+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:25.491825+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:26.492083+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:27.492329+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:28.492510+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:29.492723+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:30.492921+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:31.493147+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:32.493359+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:33.493575+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:34.493773+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:35.493941+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:36.494374+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:37.494570+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:38.494745+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:39.494885+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:40.495046+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:41.495284+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:42.495487+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:43.495659+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:44.495807+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:45.495967+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:46.496448+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:47.496601+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:48.496777+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:49.496977+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:50.497149+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:51.497328+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:52.497489+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:53.497725+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:54.497890+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:55.498084+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:56.498299+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:57.498460+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:58.498642+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:59.498950+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:00.499213+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:01.499404+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:02.499571+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:03.499778+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:04.499955+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:05.500166+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:06.500437+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:07.500611+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:08.500760+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:09.500888+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:10.501080+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:11.501288+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:12.501420+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:13.501595+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:14.501793+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:15.502013+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:16.502199+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:17.502418+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:18.502573+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:19.502806+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:20.502989+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:21.503267+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:22.503433+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:23.503612+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:24.503766+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:25.503927+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:26.504096+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:27.504248+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:28.504400+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:29.504556+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:30.504764+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:31.504912+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:32.505091+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:33.505244+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:34.505401+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:35.505608+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:36.505798+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:37.505980+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:38.506126+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:39.506302+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:40.506496+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:41.506687+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:42.506873+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:43.507134+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:44.507406+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:45.507610+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:46.507808+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:47.508019+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:48.508220+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:49.508374+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:50.508528+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:51.508687+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:52.508799+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:53.508923+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:54.509039+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:55.509199+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:56.509354+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:57.509473+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:58.509645+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:59.509891+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:00.510107+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:01.510391+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:02.510597+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:03.510875+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:04.511036+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:05.511266+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:06.511590+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:07.511787+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:08.511989+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:09.512239+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:10.512359+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:11.512550+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 1067.654418945s of 1067.670898438s, submitted: 4
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 647168 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:12.512673+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 647168 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:13.512830+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 565248 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:14.513028+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fe118000/0x0/0x4ffc00000, data 0x4add3/0xb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 9797632 heap: 74350592 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 65 ms_handle_reset con 0x55a4ef0b4000 session 0x55a4ee2e6d20
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:15.513266+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 9797632 heap: 74350592 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fdca9000/0x0/0x4ffc00000, data 0x4badd3/0x524000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 65 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:16.513473+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 496467 data_alloc: 218103808 data_used: 49152
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 9723904 heap: 74350592 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:17.514015+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 66 heartbeat osd_stat(store_statfs(0x4fdca5000/0x0/0x4ffc00000, data 0x4bc3a9/0x527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 9699328 heap: 74350592 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:18.514461+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 18087936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 66 ms_handle_reset con 0x55a4ef0b4c00 session 0x55a4ee2e6960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:19.514712+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 18087936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:20.514865+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 18087936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:21.515156+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 556101 data_alloc: 218103808 data_used: 49152
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 18087936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:22.515366+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 18071552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:23.515814+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 67 heartbeat osd_stat(store_statfs(0x4fd4a1000/0x0/0x4ffc00000, data 0xcbd98f/0xd2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.846909523s of 11.953395844s, submitted: 16
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 18071552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:24.516030+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 18071552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:25.516326+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:26.516535+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:27.517105+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:28.517313+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:29.517697+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:30.517871+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:31.518109+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:32.518257+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:33.518533+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:34.518655+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:35.518879+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:36.519049+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:37.519241+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:38.519408+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:39.519567+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:40.519725+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:41.520029+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:42.520367+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:43.520559+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:44.520746+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:45.520901+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:46.521208+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:47.521432+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:48.521590+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:49.521788+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:50.522279+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:51.522414+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:52.522635+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.079715729s of 29.092357635s, submitted: 9
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 18030592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 69 ms_handle_reset con 0x55a4ef0b4400 session 0x55a4ee2e7e00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:53.522758+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 18030592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:54.523029+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 18030592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:55.523492+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fd49a000/0x0/0x4ffc00000, data 0xcc041c/0xd33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 25911296 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:56.523654+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677841 data_alloc: 218103808 data_used: 65536
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fc49a000/0x0/0x4ffc00000, data 0x1cc0442/0x1d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 25804800 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:57.523799+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fc49a000/0x0/0x4ffc00000, data 0x1cc0442/0x1d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 25698304 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:58.523980+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 70 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee2e7a40
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 25845760 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7800
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:59.524126+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 25780224 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:00.524263+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 71 ms_handle_reset con 0x55a4f07f7800 session 0x55a4ed444d20
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 25690112 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:01.524417+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 577509 data_alloc: 218103808 data_used: 77824
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 25690112 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:02.524627+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 72 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee2f32c0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fd493000/0x0/0x4ffc00000, data 0xcc45bf/0xd3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 25804800 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:03.524988+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fd493000/0x0/0x4ffc00000, data 0xcc45bf/0xd3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c3c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.891912460s of 11.310493469s, submitted: 105
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 25853952 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:04.525122+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 73 ms_handle_reset con 0x55a4f07c3c00 session 0x55a4ee2e7860
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd493000/0x0/0x4ffc00000, data 0xcc45bf/0xd3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc5ba5/0xd3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 25812992 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:05.525314+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 73 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ed6b2960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 73 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed6963c0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc5ba5/0xd3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 25780224 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:06.525524+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 590680 data_alloc: 218103808 data_used: 86016
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 25821184 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:07.525743+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 75 ms_handle_reset con 0x55a4ef0b4c00 session 0x55a4ed696780
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 25812992 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:08.525940+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 75 heartbeat osd_stat(store_statfs(0x4fd489000/0x0/0x4ffc00000, data 0xcc8637/0xd44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 25739264 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:09.526078+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 76 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ee32e000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 76 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed5e90e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 25665536 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:10.526229+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c3c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 77 ms_handle_reset con 0x55a4f07c3c00 session 0x55a4ec84d860
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 77 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ed562780
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 25452544 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:11.526422+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 600747 data_alloc: 218103808 data_used: 90112
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fd484000/0x0/0x4ffc00000, data 0xccb2cb/0xd49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 25452544 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:12.526573+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 77 ms_handle_reset con 0x55a4ef0b4400 session 0x55a4ee17d860
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 25419776 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:13.526766+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 25419776 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fd484000/0x0/0x4ffc00000, data 0xccb2cb/0xd49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 77 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.694375038s of 10.103998184s, submitted: 108
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:14.526987+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 79 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee17c000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 79 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ee2faf00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 25395200 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:15.527254+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 24346624 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:16.527541+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 611007 data_alloc: 218103808 data_used: 90112
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 24330240 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:17.527731+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 80 heartbeat osd_stat(store_statfs(0x4fd479000/0x0/0x4ffc00000, data 0xccf407/0xd52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 24330240 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:18.527986+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 80 heartbeat osd_stat(store_statfs(0x4fd479000/0x0/0x4ffc00000, data 0xccf407/0xd52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 24330240 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:19.528261+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 24330240 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:20.528470+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 81 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ee2fba40
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 24272896 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:21.528641+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c3c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 82 ms_handle_reset con 0x55a4f07c3c00 session 0x55a4ee2fa000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 621716 data_alloc: 218103808 data_used: 98304
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 24264704 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:22.528747+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 82 ms_handle_reset con 0x55a4ef0b4000 session 0x55a4ee2fab40
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 24190976 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:23.528914+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 84 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee2f2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fd059000/0x0/0x4ffc00000, data 0xcd58fd/0xd63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24068096 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:24.529097+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 84 ms_handle_reset con 0x55a4ef0b4000 session 0x55a4ed4eda40
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.696181297s of 10.913640022s, submitted: 75
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 24043520 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:25.529238+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 86 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed4ec960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 23961600 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:26.529419+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 86 heartbeat osd_stat(store_statfs(0x4fd051000/0x0/0x4ffc00000, data 0xcd947f/0xd6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648340 data_alloc: 218103808 data_used: 163840
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 23961600 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:27.529591+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 23961600 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:28.529804+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 23855104 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:29.529956+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 86 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 87 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ee2faf00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 23650304 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c3c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:30.530074+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fbeae000/0x0/0x4ffc00000, data 0xcdabdf/0xd6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 88 ms_handle_reset con 0x55a4f07f7c00 session 0x55a4ec84d4a0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 88 ms_handle_reset con 0x55a4f07c3c00 session 0x55a4edd1f4a0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 88 heartbeat osd_stat(store_statfs(0x4fbeab000/0x0/0x4ffc00000, data 0xcdb7af/0xd70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 23519232 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:31.530316+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 654437 data_alloc: 218103808 data_used: 180224
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 89 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed4ed860
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 23363584 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:32.530498+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 90 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed2e41e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 21258240 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:33.530669+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 91 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ed6cef00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 21143552 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:34.530764+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 92 ms_handle_reset con 0x55a4f07c2000 session 0x55a4ed2d30e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 92 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ed5a4f00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 92 ms_handle_reset con 0x55a4f07f7400 session 0x55a4ed2d21e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 21020672 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:35.530928+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.749933243s of 10.585572243s, submitted: 226
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 93 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed2e4960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 20987904 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fbe9f000/0x0/0x4ffc00000, data 0xce152f/0xd7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:36.531057+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 93 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ee2e7680
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 673052 data_alloc: 218103808 data_used: 192512
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 94 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ed6970e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 20987904 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:37.531270+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fbe9b000/0x0/0x4ffc00000, data 0xce2b17/0xd7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 21078016 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:38.531423+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 95 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed4ec000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 21110784 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:39.531555+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 21110784 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:40.531694+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4edcab2c0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 21102592 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:41.531836+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 681428 data_alloc: 218103808 data_used: 208896
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 21102592 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:42.531998+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 21102592 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fbe98000/0x0/0x4ffc00000, data 0xce5794/0xd85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:43.532135+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07f7000 session 0x55a4edcaa3c0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07f7400 session 0x55a4edcaa1e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c3c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07c3c00 session 0x55a4edcaa000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee16a000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ee32eb40
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed5a4d20
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07f7400 session 0x55a4ed5a45a0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef5c4400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4ef5c4400 session 0x55a4ed5a41e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4edda74a0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 21110784 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed4ec000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:44.532283+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ee2f3c20
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 21110784 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:45.532436+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 97 ms_handle_reset con 0x55a4f07f7400 session 0x55a4ee2e7680
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1800
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 97 ms_handle_reset con 0x55a4f07c1800 session 0x55a4ee2e65a0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.030328751s of 10.426925659s, submitted: 139
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 97 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee2e6b40
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 21094400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:46.532607+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689930 data_alloc: 218103808 data_used: 221184
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fbe93000/0x0/0x4ffc00000, data 0xce6cbe/0xd8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 21094400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:47.532796+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0xce6ce1/0xd8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 21094400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:48.533023+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 21094400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:49.533239+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0xce6ce1/0xd8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 21094400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:50.533418+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 97 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ee32f4a0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 21061632 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:51.533598+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 98 ms_handle_reset con 0x55a4f07f7400 session 0x55a4ee2fa960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 699664 data_alloc: 218103808 data_used: 229376
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 98 ms_handle_reset con 0x55a4f07f6000 session 0x55a4ed5e90e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 98 ms_handle_reset con 0x55a4f07f6400 session 0x55a4ed54d0e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 98 ms_handle_reset con 0x55a4f07f6400 session 0x55a4ee2e7680
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 20881408 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:52.533745+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 99 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed5a41e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 20815872 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:53.533958+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 100 ms_handle_reset con 0x55a4f07f6000 session 0x55a4edcaa000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 20750336 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:54.534148+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 100 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ed699e00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 100 ms_handle_reset con 0x55a4f07f7400 session 0x55a4ed698b40
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 100 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed5a41e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 100 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0xceaf13/0xd97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 100 ms_handle_reset con 0x55a4f07f6000 session 0x55a4ee17da40
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 20701184 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:55.534356+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6400
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 20627456 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:56.534513+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.019435883s of 10.487161636s, submitted: 76
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 708356 data_alloc: 218103808 data_used: 249856
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 101 ms_handle_reset con 0x55a4f07f6400 session 0x55a4ed2e4f00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 19587072 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:57.534648+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 101 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ee32e960
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 19587072 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:58.534787+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fbe84000/0x0/0x4ffc00000, data 0xcec4a9/0xd97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [0,0,1])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 19578880 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:59.534977+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 102 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ed6b2780
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 19570688 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:00.535194+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 19570688 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:01.535499+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 705861 data_alloc: 218103808 data_used: 233472
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 19570688 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:02.535647+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 19570688 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:03.535816+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:04.535970+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 19546112 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 103 ms_handle_reset con 0x55a4ef0b4000 session 0x55a4edda7860
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fbe84000/0x0/0x4ffc00000, data 0xceeef7/0xd99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 103 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed2d21e0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:05.536119+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 19562496 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:06.536331+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 19562496 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 104 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed699680
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 104 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0xceeed4/0xd98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6000
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.650251389s of 10.128237724s, submitted: 119
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 712528 data_alloc: 218103808 data_used: 237568
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:07.536483+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 19546112 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 105 ms_handle_reset con 0x55a4f07f6000 session 0x55a4ee2fa780
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:08.536588+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 19529728 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fbe7f000/0x0/0x4ffc00000, data 0xcf1982/0xd9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:09.536750+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 19529728 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:10.536911+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 19529728 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:11.537029+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 19529728 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 715326 data_alloc: 218103808 data_used: 237568
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:12.537162+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 19521536 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:13.537364+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 19521536 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:14.537508+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 19521536 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fbe7f000/0x0/0x4ffc00000, data 0xcf1982/0xd9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fbe7f000/0x0/0x4ffc00000, data 0xcf1982/0xd9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:15.537644+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 19521536 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:16.537812+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 718300 data_alloc: 218103808 data_used: 237568
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:17.537964+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:18.538147+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:19.538303+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:20.538471+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:21.538655+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 718300 data_alloc: 218103808 data_used: 237568
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:22.538811+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:23.539011+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:24.539220+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:25.539409+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:26.539629+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:27.539889+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:28.540081+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:29.540241+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:30.541890+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:31.542548+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:32.543901+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:33.544104+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:34.545275+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:35.545576+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:36.546247+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:37.547089+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:38.547261+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:39.547421+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:40.547548+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:41.547727+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:42.547861+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:43.547988+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:44.548220+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:45.548395+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:46.548586+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:47.548763+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:48.548889+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:49.549042+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:50.549203+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:51.549430+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:52.549619+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:53.549812+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:54.550003+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:55.550232+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:56.550425+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:57.550581+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:58.550738+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:59.550867+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:00.551016+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:01.551115+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:02.551307+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:03.551508+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:04.551695+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:05.551882+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:06.552088+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:07.552256+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:08.552364+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:09.552521+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:10.552669+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:11.552831+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:12.553017+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:13.553213+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:14.553379+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:15.553507+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:16.553689+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:17.553798+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:18.553931+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:19.554103+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:20.554318+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:21.554439+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:22.554596+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:23.554752+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:24.554928+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:25.555084+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:26.555308+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:27.555472+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:28.555617+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:29.555827+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:30.556029+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:31.556469+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:32.558037+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:33.559114+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:34.560736+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:35.561226+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:36.561484+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:37.562375+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:38.563037+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:39.563213+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:40.563389+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 19480576 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:41.563599+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 19480576 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:42.563908+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:43.564121+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:44.564334+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:45.564597+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:46.564877+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:47.565047+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:48.565165+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:49.565345+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:50.565498+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:51.565633+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:52.565745+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:53.565885+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:54.566012+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:55.566160+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:56.566344+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:57.566455+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:58.566584+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:59.566724+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:00.566879+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:01.566991+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 23:58:35 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 23:58:35 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:02.567101+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: do_command 'config diff' '{prefix=config diff}'
Nov 25 23:58:35 compute-0 ceph-osd[89044]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 23:58:35 compute-0 ceph-osd[89044]: do_command 'config show' '{prefix=config show}'
Nov 25 23:58:35 compute-0 ceph-osd[89044]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 19046400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 23:58:35 compute-0 ceph-osd[89044]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 23:58:35 compute-0 ceph-osd[89044]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 23:58:35 compute-0 ceph-osd[89044]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:03.567218+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 18735104 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: tick
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 25 23:58:35 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:04.567343+0000)
Nov 25 23:58:35 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 18915328 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 25 23:58:35 compute-0 ceph-osd[89044]: do_command 'log dump' '{prefix=log dump}'
Nov 25 23:58:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 25 23:58:35 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1123402641' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 23:58:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:35 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 23:58:35 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 23:58:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 25 23:58:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836107007' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 23:58:36 compute-0 ceph-mon[75654]: from='client.14844 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:36 compute-0 ceph-mon[75654]: pgmap v896: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:36 compute-0 ceph-mon[75654]: from='client.14848 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:36 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1726622350' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 23:58:36 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1123402641' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 23:58:36 compute-0 ceph-mon[75654]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 23:58:36 compute-0 ceph-mon[75654]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 23:58:36 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2836107007' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 23:58:36 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14860 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v897: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 25 23:58:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113696545' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 23:58:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 25 23:58:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3291704597' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 23:58:37 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4113696545' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 23:58:37 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3291704597' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 23:58:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 25 23:58:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146059635' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 23:58:38 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 23:58:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 25 23:58:38 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822900433' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 23:58:38 compute-0 systemd[1]: Started Hostname Service.
Nov 25 23:58:38 compute-0 ceph-mon[75654]: from='client.14860 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:38 compute-0 ceph-mon[75654]: pgmap v897: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:38 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3146059635' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 23:58:38 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2822900433' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 23:58:38 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14870 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v898: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 25 23:58:38 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631429720' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 23:58:39 compute-0 ceph-mon[75654]: from='client.14870 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:39 compute-0 ceph-mon[75654]: pgmap v898: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:39 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1631429720' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 23:58:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 25 23:58:39 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2600041822' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 23:58:39 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14876 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 25 23:58:40 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/152063357' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 23:58:40 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2600041822' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 23:58:40 compute-0 ceph-mon[75654]: from='client.14876 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:40 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/152063357' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 23:58:40 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14880 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v899: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:58:40.766 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:58:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:58:40.769 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:58:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:58:40.769 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:58:40 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14882 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 25 23:58:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1152659261' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 23:58:41 compute-0 ceph-mon[75654]: from='client.14880 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:41 compute-0 ceph-mon[75654]: pgmap v899: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:41 compute-0 ceph-mon[75654]: from='client.14882 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:41 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1152659261' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 23:58:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 25 23:58:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4052102445' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 23:58:41 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14888 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:42 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4052102445' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 23:58:42 compute-0 ceph-mon[75654]: from='client.14888 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14890 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:58:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v900: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:42 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 25 23:58:42 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2665316842' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 23:58:43 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 25 23:58:43 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427722926' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 23:58:43 compute-0 ceph-mon[75654]: from='client.14890 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:43 compute-0 ceph-mon[75654]: pgmap v900: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:43 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2665316842' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 25 23:58:43 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3427722926' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 23:58:43 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14896 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:43 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14898 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 25 23:58:44 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/584420192' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 23:58:44 compute-0 ceph-mon[75654]: from='client.14896 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:44 compute-0 ceph-mon[75654]: from='client.14898 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 25 23:58:44 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/584420192' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 23:58:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v901: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 25 23:58:44 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089609647' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 23:58:45 compute-0 ovs-appctl[265236]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 23:58:45 compute-0 ovs-appctl[265241]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 23:58:45 compute-0 ovs-appctl[265247]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 23:58:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 25 23:58:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4147155489' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 23:58:45 compute-0 ceph-mon[75654]: pgmap v901: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:45 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2089609647' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 23:58:45 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4147155489' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 23:58:45 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14906 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 25 23:58:46 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3010632188' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 23:58:46 compute-0 ceph-mon[75654]: from='client.14906 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:46 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3010632188' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 23:58:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 25 23:58:46 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3215647163' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 23:58:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v902: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 25 23:58:46 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448117070' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 23:58:47 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 25 23:58:47 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521135715' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 23:58:47 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3215647163' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 23:58:47 compute-0 ceph-mon[75654]: pgmap v902: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:47 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1448117070' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 23:58:47 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2521135715' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 23:58:47 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14916 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:48 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 25 23:58:48 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4252699828' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 23:58:48 compute-0 ceph-mon[75654]: from='client.14916 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:48 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4252699828' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 23:58:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v903: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:48 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 25 23:58:48 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259861828' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 23:58:48 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14922 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 25 23:58:49 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/728045983' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 23:58:49 compute-0 ceph-mon[75654]: pgmap v903: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:49 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/259861828' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 23:58:49 compute-0 ceph-mon[75654]: from='client.14922 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:49 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/728045983' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 23:58:49 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14926 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:50 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14928 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 25 23:58:50 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531588618' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 23:58:50 compute-0 ceph-mon[75654]: from='client.14926 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:50 compute-0 ceph-mon[75654]: from='client.14928 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:50 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3531588618' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 23:58:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v904: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 25 23:58:50 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419870886' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14934 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:51 compute-0 podman[266540]: 2025-11-25 23:58:51.263191697 +0000 UTC m=+0.073447267 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14936 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:58:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:58:51 compute-0 ceph-mon[75654]: pgmap v904: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:51 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3419870886' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 23:58:51 compute-0 ceph-mon[75654]: from='client.14934 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 25 23:58:51 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3653784796' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 23:58:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 25 23:58:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2862580984' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 23:58:52 compute-0 ceph-mon[75654]: from='client.14936 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:52 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3653784796' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 23:58:52 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2862580984' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 23:58:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v905: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:52 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14942 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:52 compute-0 sudo[266665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:58:52 compute-0 sudo[266665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:52 compute-0 sudo[266665]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:52 compute-0 sudo[266692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:58:52 compute-0 sudo[266692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:52 compute-0 sudo[266692]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:52 compute-0 sudo[266723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:58:52 compute-0 sudo[266723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:52 compute-0 sudo[266723]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:52 compute-0 sudo[266767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 25 23:58:52 compute-0 sudo[266767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:53 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.14944 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:53 compute-0 sudo[266767]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:58:53 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 25 23:58:53 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 25 23:58:53 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:58:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 25 23:58:53 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1213200097' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 82438a57-162f-47b0-ae4d-2de5acd91506 does not exist
Nov 25 23:58:53 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 048b4f39-4d39-4876-b6c3-d253229815f1 does not exist
Nov 25 23:58:53 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 5678b575-faa4-4bb6-8076-72fc514c6a87 does not exist
Nov 25 23:58:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 25 23:58:53 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 25 23:58:53 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 25 23:58:53 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:58:53 compute-0 sudo[266853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:58:53 compute-0 sudo[266853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:53 compute-0 sudo[266853]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:53 compute-0 ceph-mon[75654]: pgmap v905: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:53 compute-0 ceph-mon[75654]: from='client.14942 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: from='client.14944 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:58:53 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1213200097' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 23:58:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 23:58:53 compute-0 sudo[266884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:58:53 compute-0 sudo[266884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:53 compute-0 sudo[266884]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:53 compute-0 sudo[266924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:58:53 compute-0 sudo[266924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:53 compute-0 sudo[266924]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 25 23:58:53 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292938086' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 23:58:54 compute-0 sudo[266961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 25 23:58:54 compute-0 sudo[266961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v906: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:54 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/292938086' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 23:58:54 compute-0 podman[267277]: 2025-11-25 23:58:54.967376493 +0000 UTC m=+0.055711756 container create 0f5b07a2fb8d2a9a7d8a021c1699c7a3bf1db4254941aedde967438d9c356266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 25 23:58:55 compute-0 systemd[1]: Started libpod-conmon-0f5b07a2fb8d2a9a7d8a021c1699c7a3bf1db4254941aedde967438d9c356266.scope.
Nov 25 23:58:55 compute-0 podman[267277]: 2025-11-25 23:58:54.930834919 +0000 UTC m=+0.019170162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:58:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:58:55 compute-0 podman[267277]: 2025-11-25 23:58:55.076440818 +0000 UTC m=+0.164776091 container init 0f5b07a2fb8d2a9a7d8a021c1699c7a3bf1db4254941aedde967438d9c356266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dubinsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:58:55 compute-0 podman[267277]: 2025-11-25 23:58:55.085538865 +0000 UTC m=+0.173874078 container start 0f5b07a2fb8d2a9a7d8a021c1699c7a3bf1db4254941aedde967438d9c356266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:58:55 compute-0 cool_dubinsky[267303]: 167 167
Nov 25 23:58:55 compute-0 systemd[1]: libpod-0f5b07a2fb8d2a9a7d8a021c1699c7a3bf1db4254941aedde967438d9c356266.scope: Deactivated successfully.
Nov 25 23:58:55 compute-0 podman[267277]: 2025-11-25 23:58:55.095393042 +0000 UTC m=+0.183728305 container attach 0f5b07a2fb8d2a9a7d8a021c1699c7a3bf1db4254941aedde967438d9c356266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dubinsky, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:58:55 compute-0 podman[267277]: 2025-11-25 23:58:55.095794404 +0000 UTC m=+0.184129637 container died 0f5b07a2fb8d2a9a7d8a021c1699c7a3bf1db4254941aedde967438d9c356266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff2df23865bba958a84a2099024fb3fc60c99fca05cb460ae054381de7b97a9e-merged.mount: Deactivated successfully.
Nov 25 23:58:55 compute-0 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 23:58:55 compute-0 podman[267277]: 2025-11-25 23:58:55.190969202 +0000 UTC m=+0.279304425 container remove 0f5b07a2fb8d2a9a7d8a021c1699c7a3bf1db4254941aedde967438d9c356266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:58:55 compute-0 systemd[1]: libpod-conmon-0f5b07a2fb8d2a9a7d8a021c1699c7a3bf1db4254941aedde967438d9c356266.scope: Deactivated successfully.
Nov 25 23:58:55 compute-0 podman[267353]: 2025-11-25 23:58:55.360345686 +0000 UTC m=+0.058009528 container create ed7fdcb7434fb9eb15ff57cb94616aa88a0dd15015f90434a4c976d971b8db21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_boyd, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:58:55 compute-0 systemd[1]: Started libpod-conmon-ed7fdcb7434fb9eb15ff57cb94616aa88a0dd15015f90434a4c976d971b8db21.scope.
Nov 25 23:58:55 compute-0 podman[267353]: 2025-11-25 23:58:55.332728445 +0000 UTC m=+0.030392287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:58:55 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280efd25b387c41f21c3469992a97d6c865e81f3217f755441b1e13ac92b79ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280efd25b387c41f21c3469992a97d6c865e81f3217f755441b1e13ac92b79ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280efd25b387c41f21c3469992a97d6c865e81f3217f755441b1e13ac92b79ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280efd25b387c41f21c3469992a97d6c865e81f3217f755441b1e13ac92b79ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:58:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/280efd25b387c41f21c3469992a97d6c865e81f3217f755441b1e13ac92b79ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 23:58:55 compute-0 podman[267353]: 2025-11-25 23:58:55.488634774 +0000 UTC m=+0.186298636 container init ed7fdcb7434fb9eb15ff57cb94616aa88a0dd15015f90434a4c976d971b8db21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:58:55 compute-0 podman[267353]: 2025-11-25 23:58:55.499179371 +0000 UTC m=+0.196843193 container start ed7fdcb7434fb9eb15ff57cb94616aa88a0dd15015f90434a4c976d971b8db21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 25 23:58:55 compute-0 podman[267353]: 2025-11-25 23:58:55.507698312 +0000 UTC m=+0.205362184 container attach ed7fdcb7434fb9eb15ff57cb94616aa88a0dd15015f90434a4c976d971b8db21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_boyd, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:58:55 compute-0 ceph-mon[75654]: pgmap v906: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-25_23:58:56
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'images']
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:58:56 compute-0 fervent_boyd[267376]: --> passed data devices: 0 physical, 3 LVM
Nov 25 23:58:56 compute-0 fervent_boyd[267376]: --> relative data size: 1.0
Nov 25 23:58:56 compute-0 fervent_boyd[267376]: --> All data devices are unavailable
Nov 25 23:58:56 compute-0 systemd[1]: libpod-ed7fdcb7434fb9eb15ff57cb94616aa88a0dd15015f90434a4c976d971b8db21.scope: Deactivated successfully.
Nov 25 23:58:56 compute-0 systemd[1]: libpod-ed7fdcb7434fb9eb15ff57cb94616aa88a0dd15015f90434a4c976d971b8db21.scope: Consumed 1.003s CPU time, 25.3M memory peak, read 296.0K from disk, written 0B to disk.
Nov 25 23:58:56 compute-0 podman[267353]: 2025-11-25 23:58:56.571983957 +0000 UTC m=+1.269647789 container died ed7fdcb7434fb9eb15ff57cb94616aa88a0dd15015f90434a4c976d971b8db21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 23:58:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v907: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-280efd25b387c41f21c3469992a97d6c865e81f3217f755441b1e13ac92b79ac-merged.mount: Deactivated successfully.
Nov 25 23:58:56 compute-0 podman[267353]: 2025-11-25 23:58:56.760843571 +0000 UTC m=+1.458507393 container remove ed7fdcb7434fb9eb15ff57cb94616aa88a0dd15015f90434a4c976d971b8db21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 25 23:58:56 compute-0 systemd[1]: libpod-conmon-ed7fdcb7434fb9eb15ff57cb94616aa88a0dd15015f90434a4c976d971b8db21.scope: Deactivated successfully.
Nov 25 23:58:56 compute-0 sudo[266961]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:56 compute-0 sudo[267543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:58:56 compute-0 sudo[267543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:56 compute-0 sudo[267543]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:56 compute-0 sudo[267575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:58:56 compute-0 sudo[267575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:56 compute-0 sudo[267575]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:57 compute-0 sudo[267605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:58:57 compute-0 sudo[267605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:57 compute-0 sudo[267605]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:57 compute-0 sudo[267634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 25 23:58:57 compute-0 sudo[267634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:57 compute-0 podman[267710]: 2025-11-25 23:58:57.437796845 +0000 UTC m=+0.055659274 container create d3133baff4de332d73b289395ebb53a010c853f9c257185b3e4a280199961293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 25 23:58:57 compute-0 systemd[1]: Started libpod-conmon-d3133baff4de332d73b289395ebb53a010c853f9c257185b3e4a280199961293.scope.
Nov 25 23:58:57 compute-0 podman[267710]: 2025-11-25 23:58:57.408575081 +0000 UTC m=+0.026437530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:58:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:58:57 compute-0 podman[267710]: 2025-11-25 23:58:57.542698907 +0000 UTC m=+0.160561356 container init d3133baff4de332d73b289395ebb53a010c853f9c257185b3e4a280199961293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:58:57 compute-0 podman[267710]: 2025-11-25 23:58:57.549447891 +0000 UTC m=+0.167310320 container start d3133baff4de332d73b289395ebb53a010c853f9c257185b3e4a280199961293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 25 23:58:57 compute-0 priceless_montalcini[267727]: 167 167
Nov 25 23:58:57 compute-0 systemd[1]: libpod-d3133baff4de332d73b289395ebb53a010c853f9c257185b3e4a280199961293.scope: Deactivated successfully.
Nov 25 23:58:57 compute-0 podman[267710]: 2025-11-25 23:58:57.560541072 +0000 UTC m=+0.178403531 container attach d3133baff4de332d73b289395ebb53a010c853f9c257185b3e4a280199961293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:58:57 compute-0 podman[267710]: 2025-11-25 23:58:57.561433517 +0000 UTC m=+0.179295946 container died d3133baff4de332d73b289395ebb53a010c853f9c257185b3e4a280199961293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4bcf52cba8489ea9840d470a2b63e9096125bbb498973a74228df692145c0f3-merged.mount: Deactivated successfully.
Nov 25 23:58:57 compute-0 podman[267710]: 2025-11-25 23:58:57.635162071 +0000 UTC m=+0.253024550 container remove d3133baff4de332d73b289395ebb53a010c853f9c257185b3e4a280199961293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_montalcini, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 23:58:57 compute-0 systemd[1]: libpod-conmon-d3133baff4de332d73b289395ebb53a010c853f9c257185b3e4a280199961293.scope: Deactivated successfully.
Nov 25 23:58:57 compute-0 ceph-mon[75654]: pgmap v907: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:57 compute-0 podman[267750]: 2025-11-25 23:58:57.831430407 +0000 UTC m=+0.057603848 container create 32df77203e4713ca54a3c2869ef2e822a3d161edfd8a8150129a22234c1095a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_almeida, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 25 23:58:57 compute-0 systemd[1]: Started libpod-conmon-32df77203e4713ca54a3c2869ef2e822a3d161edfd8a8150129a22234c1095a8.scope.
Nov 25 23:58:57 compute-0 podman[267750]: 2025-11-25 23:58:57.801065812 +0000 UTC m=+0.027239303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:58:57 compute-0 systemd[1]: Starting Time & Date Service...
Nov 25 23:58:57 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf687e10b974a509564b05c166a3241e0e62237ed084f3b005f53437c6f24a27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf687e10b974a509564b05c166a3241e0e62237ed084f3b005f53437c6f24a27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf687e10b974a509564b05c166a3241e0e62237ed084f3b005f53437c6f24a27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf687e10b974a509564b05c166a3241e0e62237ed084f3b005f53437c6f24a27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:58:57 compute-0 podman[267750]: 2025-11-25 23:58:57.953315031 +0000 UTC m=+0.179488512 container init 32df77203e4713ca54a3c2869ef2e822a3d161edfd8a8150129a22234c1095a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_almeida, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:58:57 compute-0 podman[267750]: 2025-11-25 23:58:57.962561192 +0000 UTC m=+0.188734643 container start 32df77203e4713ca54a3c2869ef2e822a3d161edfd8a8150129a22234c1095a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_almeida, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:58:57 compute-0 podman[267750]: 2025-11-25 23:58:57.972392339 +0000 UTC m=+0.198565800 container attach 32df77203e4713ca54a3c2869ef2e822a3d161edfd8a8150129a22234c1095a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:58:58 compute-0 systemd[1]: Started Time & Date Service.
Nov 25 23:58:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v908: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:58 compute-0 funny_almeida[267772]: {
Nov 25 23:58:58 compute-0 funny_almeida[267772]:     "0": [
Nov 25 23:58:58 compute-0 funny_almeida[267772]:         {
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "devices": [
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "/dev/loop3"
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             ],
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_name": "ceph_lv0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_size": "21470642176",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "name": "ceph_lv0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "tags": {
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.cluster_name": "ceph",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.crush_device_class": "",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.encrypted": "0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.osd_id": "0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.type": "block",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.vdo": "0"
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             },
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "type": "block",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "vg_name": "ceph_vg0"
Nov 25 23:58:58 compute-0 funny_almeida[267772]:         }
Nov 25 23:58:58 compute-0 funny_almeida[267772]:     ],
Nov 25 23:58:58 compute-0 funny_almeida[267772]:     "1": [
Nov 25 23:58:58 compute-0 funny_almeida[267772]:         {
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "devices": [
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "/dev/loop4"
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             ],
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_name": "ceph_lv1",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_size": "21470642176",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "name": "ceph_lv1",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "tags": {
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.cluster_name": "ceph",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.crush_device_class": "",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.encrypted": "0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.osd_id": "1",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.type": "block",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.vdo": "0"
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             },
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "type": "block",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "vg_name": "ceph_vg1"
Nov 25 23:58:58 compute-0 funny_almeida[267772]:         }
Nov 25 23:58:58 compute-0 funny_almeida[267772]:     ],
Nov 25 23:58:58 compute-0 funny_almeida[267772]:     "2": [
Nov 25 23:58:58 compute-0 funny_almeida[267772]:         {
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "devices": [
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "/dev/loop5"
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             ],
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_name": "ceph_lv2",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_size": "21470642176",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "name": "ceph_lv2",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "tags": {
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.cephx_lockbox_secret": "",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.cluster_name": "ceph",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.crush_device_class": "",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.encrypted": "0",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.osd_id": "2",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.type": "block",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:                 "ceph.vdo": "0"
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             },
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "type": "block",
Nov 25 23:58:58 compute-0 funny_almeida[267772]:             "vg_name": "ceph_vg2"
Nov 25 23:58:58 compute-0 funny_almeida[267772]:         }
Nov 25 23:58:58 compute-0 funny_almeida[267772]:     ]
Nov 25 23:58:58 compute-0 funny_almeida[267772]: }
Nov 25 23:58:58 compute-0 systemd[1]: libpod-32df77203e4713ca54a3c2869ef2e822a3d161edfd8a8150129a22234c1095a8.scope: Deactivated successfully.
Nov 25 23:58:58 compute-0 podman[267750]: 2025-11-25 23:58:58.794687824 +0000 UTC m=+1.020861275 container died 32df77203e4713ca54a3c2869ef2e822a3d161edfd8a8150129a22234c1095a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 25 23:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf687e10b974a509564b05c166a3241e0e62237ed084f3b005f53437c6f24a27-merged.mount: Deactivated successfully.
Nov 25 23:58:58 compute-0 podman[267750]: 2025-11-25 23:58:58.882009779 +0000 UTC m=+1.108183230 container remove 32df77203e4713ca54a3c2869ef2e822a3d161edfd8a8150129a22234c1095a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 25 23:58:58 compute-0 systemd[1]: libpod-conmon-32df77203e4713ca54a3c2869ef2e822a3d161edfd8a8150129a22234c1095a8.scope: Deactivated successfully.
Nov 25 23:58:58 compute-0 sudo[267634]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:59 compute-0 sudo[267798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:58:59 compute-0 sudo[267798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:59 compute-0 sudo[267798]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:59 compute-0 sudo[267825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 25 23:58:59 compute-0 sudo[267825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:59 compute-0 sudo[267825]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:59 compute-0 sudo[267850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:58:59 compute-0 sudo[267850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:59 compute-0 sudo[267850]: pam_unix(sudo:session): session closed for user root
Nov 25 23:58:59 compute-0 sudo[267875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 25 23:58:59 compute-0 sudo[267875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:58:59 compute-0 podman[267938]: 2025-11-25 23:58:59.642139945 +0000 UTC m=+0.059973982 container create f06f6989f68c9d1dd18424c2ce5d0d7a981085a963107449ed7c8cc6515db245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 23:58:59 compute-0 systemd[1]: Started libpod-conmon-f06f6989f68c9d1dd18424c2ce5d0d7a981085a963107449ed7c8cc6515db245.scope.
Nov 25 23:58:59 compute-0 podman[267938]: 2025-11-25 23:58:59.614782621 +0000 UTC m=+0.032616718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:58:59 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:58:59 compute-0 ceph-mon[75654]: pgmap v908: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:58:59 compute-0 podman[267938]: 2025-11-25 23:58:59.748060284 +0000 UTC m=+0.165894321 container init f06f6989f68c9d1dd18424c2ce5d0d7a981085a963107449ed7c8cc6515db245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldstine, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 23:58:59 compute-0 podman[267938]: 2025-11-25 23:58:59.755683322 +0000 UTC m=+0.173517329 container start f06f6989f68c9d1dd18424c2ce5d0d7a981085a963107449ed7c8cc6515db245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 25 23:58:59 compute-0 magical_goldstine[267971]: 167 167
Nov 25 23:58:59 compute-0 systemd[1]: libpod-f06f6989f68c9d1dd18424c2ce5d0d7a981085a963107449ed7c8cc6515db245.scope: Deactivated successfully.
Nov 25 23:58:59 compute-0 podman[267938]: 2025-11-25 23:58:59.759592978 +0000 UTC m=+0.177427035 container attach f06f6989f68c9d1dd18424c2ce5d0d7a981085a963107449ed7c8cc6515db245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 23:58:59 compute-0 podman[267938]: 2025-11-25 23:58:59.763852954 +0000 UTC m=+0.181686971 container died f06f6989f68c9d1dd18424c2ce5d0d7a981085a963107449ed7c8cc6515db245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 25 23:58:59 compute-0 podman[267952]: 2025-11-25 23:58:59.78029733 +0000 UTC m=+0.093180724 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 23:58:59 compute-0 podman[267953]: 2025-11-25 23:58:59.781687628 +0000 UTC m=+0.094966022 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 23:58:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-798cfca87a78ebc250e9fce6ed9f06a784940a9267568f6b59e87d6e1f01c958-merged.mount: Deactivated successfully.
Nov 25 23:58:59 compute-0 podman[267938]: 2025-11-25 23:58:59.807388747 +0000 UTC m=+0.225222764 container remove f06f6989f68c9d1dd18424c2ce5d0d7a981085a963107449ed7c8cc6515db245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 25 23:58:59 compute-0 systemd[1]: libpod-conmon-f06f6989f68c9d1dd18424c2ce5d0d7a981085a963107449ed7c8cc6515db245.scope: Deactivated successfully.
Nov 25 23:58:59 compute-0 podman[268019]: 2025-11-25 23:58:59.985470099 +0000 UTC m=+0.043267308 container create 2cb64a0d4a73c50ca2e069cbd438851ffac9015212465541f7a0a7c9d8ba7119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 23:59:00 compute-0 systemd[1]: Started libpod-conmon-2cb64a0d4a73c50ca2e069cbd438851ffac9015212465541f7a0a7c9d8ba7119.scope.
Nov 25 23:59:00 compute-0 podman[268019]: 2025-11-25 23:58:59.96346743 +0000 UTC m=+0.021264659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 25 23:59:00 compute-0 systemd[1]: Started libcrun container.
Nov 25 23:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d461d1680271004d680372c4dd300af5318dc79096c1056d205bc54aa5146530/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 23:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d461d1680271004d680372c4dd300af5318dc79096c1056d205bc54aa5146530/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 23:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d461d1680271004d680372c4dd300af5318dc79096c1056d205bc54aa5146530/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 23:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d461d1680271004d680372c4dd300af5318dc79096c1056d205bc54aa5146530/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 23:59:00 compute-0 podman[268019]: 2025-11-25 23:59:00.093602758 +0000 UTC m=+0.151400067 container init 2cb64a0d4a73c50ca2e069cbd438851ffac9015212465541f7a0a7c9d8ba7119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 25 23:59:00 compute-0 podman[268019]: 2025-11-25 23:59:00.103911469 +0000 UTC m=+0.161708678 container start 2cb64a0d4a73c50ca2e069cbd438851ffac9015212465541f7a0a7c9d8ba7119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:59:00 compute-0 podman[268019]: 2025-11-25 23:59:00.10801796 +0000 UTC m=+0.165815259 container attach 2cb64a0d4a73c50ca2e069cbd438851ffac9015212465541f7a0a7c9d8ba7119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 23:59:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v909: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:01 compute-0 admiring_haslett[268036]: {
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "osd_id": 2,
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "type": "bluestore"
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:     },
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "osd_id": 1,
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "type": "bluestore"
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:     },
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "osd_id": 0,
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:         "type": "bluestore"
Nov 25 23:59:01 compute-0 admiring_haslett[268036]:     }
Nov 25 23:59:01 compute-0 admiring_haslett[268036]: }
Nov 25 23:59:01 compute-0 systemd[1]: libpod-2cb64a0d4a73c50ca2e069cbd438851ffac9015212465541f7a0a7c9d8ba7119.scope: Deactivated successfully.
Nov 25 23:59:01 compute-0 podman[268019]: 2025-11-25 23:59:01.097590893 +0000 UTC m=+1.155388112 container died 2cb64a0d4a73c50ca2e069cbd438851ffac9015212465541f7a0a7c9d8ba7119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haslett, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 23:59:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d461d1680271004d680372c4dd300af5318dc79096c1056d205bc54aa5146530-merged.mount: Deactivated successfully.
Nov 25 23:59:01 compute-0 podman[268019]: 2025-11-25 23:59:01.282507981 +0000 UTC m=+1.340305210 container remove 2cb64a0d4a73c50ca2e069cbd438851ffac9015212465541f7a0a7c9d8ba7119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 23:59:01 compute-0 systemd[1]: libpod-conmon-2cb64a0d4a73c50ca2e069cbd438851ffac9015212465541f7a0a7c9d8ba7119.scope: Deactivated successfully.
Nov 25 23:59:01 compute-0 sudo[267875]: pam_unix(sudo:session): session closed for user root
Nov 25 23:59:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 25 23:59:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:59:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 25 23:59:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev b3624e8e-ca39-4ab4-860f-41b0dafd5f66 does not exist
Nov 25 23:59:01 compute-0 sudo[268081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 25 23:59:01 compute-0 sudo[268081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:59:01 compute-0 sudo[268081]: pam_unix(sudo:session): session closed for user root
Nov 25 23:59:01 compute-0 sudo[268106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 25 23:59:01 compute-0 sudo[268106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 25 23:59:01 compute-0 sudo[268106]: pam_unix(sudo:session): session closed for user root
Nov 25 23:59:01 compute-0 ceph-mon[75654]: pgmap v909: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:59:01 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 25 23:59:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 25 23:59:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v910: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:04 compute-0 ceph-mon[75654]: pgmap v910: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v911: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:05.849701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115145849737, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 934, "num_deletes": 256, "total_data_size": 735537, "memory_usage": 754792, "flush_reason": "Manual Compaction"}
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115145879774, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 724761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18243, "largest_seqno": 19176, "table_properties": {"data_size": 720105, "index_size": 2116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11682, "raw_average_key_size": 19, "raw_value_size": 710039, "raw_average_value_size": 1207, "num_data_blocks": 95, "num_entries": 588, "num_filter_entries": 588, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764115086, "oldest_key_time": 1764115086, "file_creation_time": 1764115145, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 30129 microseconds, and 4065 cpu microseconds.
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:05.879827) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 724761 bytes OK
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:05.879851) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:05.884680) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:05.884701) EVENT_LOG_v1 {"time_micros": 1764115145884695, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:05.884719) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 730687, prev total WAL file size 730687, number of live WAL files 2.
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:05.885820) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(707KB)], [44(4712KB)]
Nov 25 23:59:05 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115145885898, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 5550727, "oldest_snapshot_seqno": -1}
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 3835 keys, 5449074 bytes, temperature: kUnknown
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115146153810, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 5449074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5421635, "index_size": 16769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 92680, "raw_average_key_size": 24, "raw_value_size": 5350865, "raw_average_value_size": 1395, "num_data_blocks": 713, "num_entries": 3835, "num_filter_entries": 3835, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764115145, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:06.154105) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 5449074 bytes
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:06.157699) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 20.7 rd, 20.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 4.6 +0.0 blob) out(5.2 +0.0 blob), read-write-amplify(15.2) write-amplify(7.5) OK, records in: 4359, records dropped: 524 output_compression: NoCompression
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:06.157731) EVENT_LOG_v1 {"time_micros": 1764115146157716, "job": 22, "event": "compaction_finished", "compaction_time_micros": 267995, "compaction_time_cpu_micros": 17557, "output_level": 6, "num_output_files": 1, "total_output_size": 5449074, "num_input_records": 4359, "num_output_records": 3835, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115146158274, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115146159959, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:05.885681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:06.160051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:06.160056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:06.160057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:06.160059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:59:06 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/25-23:59:06.160060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 23:59:06 compute-0 ceph-mon[75654]: pgmap v911: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v912: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:06 compute-0 nova_compute[252550]: 2025-11-25 23:59:06.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:59:06 compute-0 nova_compute[252550]: 2025-11-25 23:59:06.824 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 23:59:07 compute-0 nova_compute[252550]: 2025-11-25 23:59:07.819 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:59:07 compute-0 nova_compute[252550]: 2025-11-25 23:59:07.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:59:08 compute-0 ceph-mon[75654]: pgmap v912: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v913: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:08 compute-0 nova_compute[252550]: 2025-11-25 23:59:08.817 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:59:08 compute-0 nova_compute[252550]: 2025-11-25 23:59:08.855 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:59:08 compute-0 nova_compute[252550]: 2025-11-25 23:59:08.855 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:59:08 compute-0 nova_compute[252550]: 2025-11-25 23:59:08.912 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:59:08 compute-0 nova_compute[252550]: 2025-11-25 23:59:08.912 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:59:08 compute-0 nova_compute[252550]: 2025-11-25 23:59:08.912 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:59:08 compute-0 nova_compute[252550]: 2025-11-25 23:59:08.912 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 23:59:08 compute-0 nova_compute[252550]: 2025-11-25 23:59:08.913 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:59:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:59:09 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478515868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:59:09 compute-0 nova_compute[252550]: 2025-11-25 23:59:09.324 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:59:09 compute-0 nova_compute[252550]: 2025-11-25 23:59:09.481 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 23:59:09 compute-0 nova_compute[252550]: 2025-11-25 23:59:09.482 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5003MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 23:59:09 compute-0 nova_compute[252550]: 2025-11-25 23:59:09.483 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:59:09 compute-0 nova_compute[252550]: 2025-11-25 23:59:09.483 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:59:09 compute-0 ceph-mon[75654]: pgmap v913: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:09 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3478515868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:59:09 compute-0 nova_compute[252550]: 2025-11-25 23:59:09.677 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 23:59:09 compute-0 nova_compute[252550]: 2025-11-25 23:59:09.677 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 23:59:09 compute-0 nova_compute[252550]: 2025-11-25 23:59:09.712 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 23:59:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 25 23:59:10 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1063312293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:59:10 compute-0 nova_compute[252550]: 2025-11-25 23:59:10.145 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 23:59:10 compute-0 nova_compute[252550]: 2025-11-25 23:59:10.150 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 23:59:10 compute-0 nova_compute[252550]: 2025-11-25 23:59:10.186 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 23:59:10 compute-0 nova_compute[252550]: 2025-11-25 23:59:10.188 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 23:59:10 compute-0 nova_compute[252550]: 2025-11-25 23:59:10.188 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:59:10 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1063312293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 23:59:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v914: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:11 compute-0 nova_compute[252550]: 2025-11-25 23:59:11.155 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:59:11 compute-0 nova_compute[252550]: 2025-11-25 23:59:11.156 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 23:59:11 compute-0 nova_compute[252550]: 2025-11-25 23:59:11.157 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 23:59:11 compute-0 nova_compute[252550]: 2025-11-25 23:59:11.249 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 23:59:11 compute-0 nova_compute[252550]: 2025-11-25 23:59:11.249 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:59:11 compute-0 ceph-mon[75654]: pgmap v914: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:11 compute-0 nova_compute[252550]: 2025-11-25 23:59:11.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:59:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v915: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:13 compute-0 nova_compute[252550]: 2025-11-25 23:59:13.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 23:59:14 compute-0 ceph-mon[75654]: pgmap v915: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v916: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:16 compute-0 ceph-mon[75654]: pgmap v916: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v917: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 25 23:59:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3976786771' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:59:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 25 23:59:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3976786771' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:59:18 compute-0 ceph-mon[75654]: pgmap v917: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3976786771' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 23:59:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3976786771' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 23:59:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v918: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v919: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:20 compute-0 ceph-mon[75654]: pgmap v918: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:21 compute-0 sudo[260722]: pam_unix(sudo:session): session closed for user root
Nov 25 23:59:21 compute-0 sshd-session[260721]: Received disconnect from 192.168.122.10 port 57878:11: disconnected by user
Nov 25 23:59:21 compute-0 sshd-session[260721]: Disconnected from user zuul 192.168.122.10 port 57878
Nov 25 23:59:21 compute-0 sshd-session[260718]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:59:21 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 25 23:59:21 compute-0 systemd[1]: session-52.scope: Consumed 2min 28.788s CPU time, 732.9M memory peak, read 301.3M from disk, written 63.0M to disk.
Nov 25 23:59:21 compute-0 systemd-logind[789]: Session 52 logged out. Waiting for processes to exit.
Nov 25 23:59:21 compute-0 systemd-logind[789]: Removed session 52.
Nov 25 23:59:21 compute-0 sshd-session[268175]: Accepted publickey for zuul from 192.168.122.10 port 56228 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:59:21 compute-0 systemd-logind[789]: New session 53 of user zuul.
Nov 25 23:59:21 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 25 23:59:21 compute-0 sshd-session[268175]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:59:21 compute-0 sudo[268179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-11-25-smsfdbd.tar.xz
Nov 25 23:59:21 compute-0 sudo[268179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:59:21 compute-0 podman[268203]: 2025-11-25 23:59:21.412897012 +0000 UTC m=+0.094949793 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 23:59:21 compute-0 sudo[268179]: pam_unix(sudo:session): session closed for user root
Nov 25 23:59:21 compute-0 sshd-session[268178]: Received disconnect from 192.168.122.10 port 56228:11: disconnected by user
Nov 25 23:59:21 compute-0 sshd-session[268178]: Disconnected from user zuul 192.168.122.10 port 56228
Nov 25 23:59:21 compute-0 sshd-session[268175]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:59:21 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 25 23:59:21 compute-0 systemd-logind[789]: Session 53 logged out. Waiting for processes to exit.
Nov 25 23:59:21 compute-0 systemd-logind[789]: Removed session 53.
Nov 25 23:59:21 compute-0 sshd-session[268224]: Accepted publickey for zuul from 192.168.122.10 port 56236 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 25 23:59:21 compute-0 systemd-logind[789]: New session 54 of user zuul.
Nov 25 23:59:21 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 25 23:59:21 compute-0 sshd-session[268224]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 23:59:21 compute-0 ceph-mon[75654]: pgmap v919: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:21 compute-0 sudo[268228]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 25 23:59:21 compute-0 sudo[268228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 23:59:21 compute-0 sudo[268228]: pam_unix(sudo:session): session closed for user root
Nov 25 23:59:21 compute-0 sshd-session[268227]: Received disconnect from 192.168.122.10 port 56236:11: disconnected by user
Nov 25 23:59:21 compute-0 sshd-session[268227]: Disconnected from user zuul 192.168.122.10 port 56236
Nov 25 23:59:21 compute-0 sshd-session[268224]: pam_unix(sshd:session): session closed for user zuul
Nov 25 23:59:21 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Nov 25 23:59:21 compute-0 systemd-logind[789]: Session 54 logged out. Waiting for processes to exit.
Nov 25 23:59:21 compute-0 systemd-logind[789]: Removed session 54.
Nov 25 23:59:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v920: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:23 compute-0 ceph-mon[75654]: pgmap v920: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v921: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:26 compute-0 ceph-mon[75654]: pgmap v921: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:59:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:59:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:59:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:59:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:59:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:59:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v922: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:28 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 23:59:28 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 23:59:28 compute-0 ceph-mon[75654]: pgmap v922: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v923: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:30 compute-0 podman[268258]: 2025-11-25 23:59:30.267904779 +0000 UTC m=+0.083641494 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 23:59:30 compute-0 ceph-mon[75654]: pgmap v923: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:30 compute-0 podman[268257]: 2025-11-25 23:59:30.331821648 +0000 UTC m=+0.146684859 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 23:59:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v924: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:32 compute-0 ceph-mon[75654]: pgmap v924: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v925: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:33 compute-0 ceph-mon[75654]: pgmap v925: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v926: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:35 compute-0 ceph-mon[75654]: pgmap v926: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v927: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:38 compute-0 ceph-mon[75654]: pgmap v927: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v928: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:40 compute-0 ceph-mon[75654]: pgmap v928: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v929: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:59:40.767 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 23:59:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:59:40.768 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 23:59:40 compute-0 ovn_metadata_agent[160720]: 2025-11-25 23:59:40.768 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 23:59:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:42 compute-0 ceph-mon[75654]: pgmap v929: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v930: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:43 compute-0 ceph-mon[75654]: pgmap v930: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v931: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:45 compute-0 ceph-mon[75654]: pgmap v931: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v932: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:47 compute-0 ceph-mon[75654]: pgmap v932: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v933: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:49 compute-0 ceph-mon[75654]: pgmap v933: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v934: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:51 compute-0 ceph-mon[75654]: pgmap v934: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:52 compute-0 podman[268302]: 2025-11-25 23:59:52.26171523 +0000 UTC m=+0.081915297 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 23:59:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v935: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:54 compute-0 ceph-mon[75654]: pgmap v935: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v936: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 23:59:56 compute-0 ceph-mon[75654]: pgmap v936: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 25 23:59:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v937: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:58 compute-0 ceph-mon[75654]: pgmap v937: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 25 23:59:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v938: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:00 compute-0 ceph-mon[75654]: pgmap v938: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v939: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:01 compute-0 systemd[1]: Starting update of the root trust anchor for DNSSEC validation in unbound...
Nov 26 00:00:01 compute-0 systemd[1]: Starting Rotate log files...
Nov 26 00:00:01 compute-0 podman[268324]: 2025-11-26 00:00:01.230531324 +0000 UTC m=+0.048222952 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 00:00:01 compute-0 systemd[1]: unbound-anchor.service: Deactivated successfully.
Nov 26 00:00:01 compute-0 systemd[1]: Finished update of the root trust anchor for DNSSEC validation in unbound.
Nov 26 00:00:01 compute-0 podman[268323]: 2025-11-26 00:00:01.264865097 +0000 UTC m=+0.087481979 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 00:00:01 compute-0 systemd[1]: logrotate.service: Deactivated successfully.
Nov 26 00:00:01 compute-0 systemd[1]: Finished Rotate log files.
Nov 26 00:00:01 compute-0 sudo[268374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:00:01 compute-0 sudo[268374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:01 compute-0 sudo[268374]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:01 compute-0 sudo[268399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:00:01 compute-0 sudo[268399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:01 compute-0 sudo[268399]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:01 compute-0 sudo[268424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:00:01 compute-0 sudo[268424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:01 compute-0 sudo[268424]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:01 compute-0 sudo[268449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 00:00:01 compute-0 sudo[268449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:00:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:00:02 compute-0 ceph-mon[75654]: pgmap v939: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:02 compute-0 sudo[268449]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:00:02 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:00:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 00:00:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:00:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 00:00:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:00:02 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 51e0fed9-3c59-43ed-a408-d3e73a7b2129 does not exist
Nov 26 00:00:02 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 0896ec21-652d-42a3-b04f-612989227d64 does not exist
Nov 26 00:00:02 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 9b1aa101-6a55-427f-a671-a4f8715238c3 does not exist
Nov 26 00:00:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 00:00:02 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:00:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 00:00:02 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:00:02 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:00:02 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:00:02 compute-0 sudo[268505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:00:02 compute-0 sudo[268505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:02 compute-0 sudo[268505]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:02 compute-0 sudo[268530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:00:02 compute-0 sudo[268530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:02 compute-0 sudo[268530]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:02 compute-0 sudo[268555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:00:02 compute-0 sudo[268555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:02 compute-0 sudo[268555]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:02 compute-0 sudo[268580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 00:00:02 compute-0 sudo[268580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v940: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:02 compute-0 podman[268649]: 2025-11-26 00:00:02.923134251 +0000 UTC m=+0.038592311 container create f650b97b2f0bccf087ca7c0465a1308f7b496fa8d192b27181acbd03abb759a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 26 00:00:02 compute-0 systemd[1]: Started libpod-conmon-f650b97b2f0bccf087ca7c0465a1308f7b496fa8d192b27181acbd03abb759a5.scope.
Nov 26 00:00:02 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:00:02 compute-0 podman[268649]: 2025-11-26 00:00:02.994024217 +0000 UTC m=+0.109482287 container init f650b97b2f0bccf087ca7c0465a1308f7b496fa8d192b27181acbd03abb759a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 00:00:02 compute-0 podman[268649]: 2025-11-26 00:00:02.999498296 +0000 UTC m=+0.114956346 container start f650b97b2f0bccf087ca7c0465a1308f7b496fa8d192b27181acbd03abb759a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:00:03 compute-0 podman[268649]: 2025-11-26 00:00:02.906530739 +0000 UTC m=+0.021988809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:00:03 compute-0 podman[268649]: 2025-11-26 00:00:03.003351651 +0000 UTC m=+0.118809731 container attach f650b97b2f0bccf087ca7c0465a1308f7b496fa8d192b27181acbd03abb759a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 00:00:03 compute-0 quizzical_satoshi[268665]: 167 167
Nov 26 00:00:03 compute-0 systemd[1]: libpod-f650b97b2f0bccf087ca7c0465a1308f7b496fa8d192b27181acbd03abb759a5.scope: Deactivated successfully.
Nov 26 00:00:03 compute-0 conmon[268665]: conmon f650b97b2f0bccf087ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f650b97b2f0bccf087ca7c0465a1308f7b496fa8d192b27181acbd03abb759a5.scope/container/memory.events
Nov 26 00:00:03 compute-0 podman[268649]: 2025-11-26 00:00:03.006287261 +0000 UTC m=+0.121745351 container died f650b97b2f0bccf087ca7c0465a1308f7b496fa8d192b27181acbd03abb759a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:00:03 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 00:00:03 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 00:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed5aa74defc63a763fac00a08de014c869422d5b537254ce833702651192c7ba-merged.mount: Deactivated successfully.
Nov 26 00:00:03 compute-0 podman[268649]: 2025-11-26 00:00:03.062755307 +0000 UTC m=+0.178213397 container remove f650b97b2f0bccf087ca7c0465a1308f7b496fa8d192b27181acbd03abb759a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:00:03 compute-0 systemd[1]: libpod-conmon-f650b97b2f0bccf087ca7c0465a1308f7b496fa8d192b27181acbd03abb759a5.scope: Deactivated successfully.
Nov 26 00:00:03 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:00:03 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:00:03 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:00:03 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:00:03 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:00:03 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:00:03 compute-0 podman[268692]: 2025-11-26 00:00:03.289061789 +0000 UTC m=+0.048507019 container create 1e791d285a25385b042a12903ac659c8a03b48f495dea5d6ffd813a24ceb267f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:00:03 compute-0 systemd[1]: Started libpod-conmon-1e791d285a25385b042a12903ac659c8a03b48f495dea5d6ffd813a24ceb267f.scope.
Nov 26 00:00:03 compute-0 podman[268692]: 2025-11-26 00:00:03.269808775 +0000 UTC m=+0.029253985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:00:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e79c0c6cdc3c0905e071fdec29fdfaf4ad3aaf0e3a9cc3fff9c25f7f5d7197a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e79c0c6cdc3c0905e071fdec29fdfaf4ad3aaf0e3a9cc3fff9c25f7f5d7197a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e79c0c6cdc3c0905e071fdec29fdfaf4ad3aaf0e3a9cc3fff9c25f7f5d7197a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e79c0c6cdc3c0905e071fdec29fdfaf4ad3aaf0e3a9cc3fff9c25f7f5d7197a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e79c0c6cdc3c0905e071fdec29fdfaf4ad3aaf0e3a9cc3fff9c25f7f5d7197a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:03 compute-0 podman[268692]: 2025-11-26 00:00:03.396340326 +0000 UTC m=+0.155785606 container init 1e791d285a25385b042a12903ac659c8a03b48f495dea5d6ffd813a24ceb267f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 00:00:03 compute-0 podman[268692]: 2025-11-26 00:00:03.40752156 +0000 UTC m=+0.166966790 container start 1e791d285a25385b042a12903ac659c8a03b48f495dea5d6ffd813a24ceb267f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 00:00:03 compute-0 podman[268692]: 2025-11-26 00:00:03.41158743 +0000 UTC m=+0.171032650 container attach 1e791d285a25385b042a12903ac659c8a03b48f495dea5d6ffd813a24ceb267f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 26 00:00:04 compute-0 admiring_proskuriakova[268708]: --> passed data devices: 0 physical, 3 LVM
Nov 26 00:00:04 compute-0 admiring_proskuriakova[268708]: --> relative data size: 1.0
Nov 26 00:00:04 compute-0 admiring_proskuriakova[268708]: --> All data devices are unavailable
Nov 26 00:00:04 compute-0 systemd[1]: libpod-1e791d285a25385b042a12903ac659c8a03b48f495dea5d6ffd813a24ceb267f.scope: Deactivated successfully.
Nov 26 00:00:04 compute-0 podman[268692]: 2025-11-26 00:00:04.441278034 +0000 UTC m=+1.200723234 container died 1e791d285a25385b042a12903ac659c8a03b48f495dea5d6ffd813a24ceb267f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 26 00:00:04 compute-0 ceph-mon[75654]: pgmap v940: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e79c0c6cdc3c0905e071fdec29fdfaf4ad3aaf0e3a9cc3fff9c25f7f5d7197a-merged.mount: Deactivated successfully.
Nov 26 00:00:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v941: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:04 compute-0 podman[268692]: 2025-11-26 00:00:04.670786334 +0000 UTC m=+1.430231534 container remove 1e791d285a25385b042a12903ac659c8a03b48f495dea5d6ffd813a24ceb267f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:00:04 compute-0 sudo[268580]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:04 compute-0 systemd[1]: libpod-conmon-1e791d285a25385b042a12903ac659c8a03b48f495dea5d6ffd813a24ceb267f.scope: Deactivated successfully.
Nov 26 00:00:04 compute-0 sudo[268749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:00:04 compute-0 sudo[268749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:04 compute-0 sudo[268749]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:04 compute-0 sudo[268774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:00:04 compute-0 sudo[268774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:04 compute-0 sudo[268774]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:04 compute-0 sudo[268799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:00:04 compute-0 sudo[268799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:04 compute-0 sudo[268799]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:04 compute-0 sudo[268824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 26 00:00:04 compute-0 sudo[268824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:05 compute-0 podman[268889]: 2025-11-26 00:00:05.514254324 +0000 UTC m=+0.122558232 container create 49cfb2972c80e70aa85b576417fe78bfc8d83c6b3b1e64644a3031d7778fb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 26 00:00:05 compute-0 podman[268889]: 2025-11-26 00:00:05.431678259 +0000 UTC m=+0.039982207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:00:05 compute-0 ceph-mon[75654]: pgmap v941: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:05 compute-0 systemd[1]: Started libpod-conmon-49cfb2972c80e70aa85b576417fe78bfc8d83c6b3b1e64644a3031d7778fb990.scope.
Nov 26 00:00:05 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:00:05 compute-0 podman[268889]: 2025-11-26 00:00:05.67000614 +0000 UTC m=+0.278310088 container init 49cfb2972c80e70aa85b576417fe78bfc8d83c6b3b1e64644a3031d7778fb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 00:00:05 compute-0 podman[268889]: 2025-11-26 00:00:05.680758451 +0000 UTC m=+0.289062339 container start 49cfb2972c80e70aa85b576417fe78bfc8d83c6b3b1e64644a3031d7778fb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:00:05 compute-0 wonderful_almeida[268906]: 167 167
Nov 26 00:00:05 compute-0 systemd[1]: libpod-49cfb2972c80e70aa85b576417fe78bfc8d83c6b3b1e64644a3031d7778fb990.scope: Deactivated successfully.
Nov 26 00:00:05 compute-0 conmon[268906]: conmon 49cfb2972c80e70aa85b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49cfb2972c80e70aa85b576417fe78bfc8d83c6b3b1e64644a3031d7778fb990.scope/container/memory.events
Nov 26 00:00:05 compute-0 podman[268889]: 2025-11-26 00:00:05.693555029 +0000 UTC m=+0.301859017 container attach 49cfb2972c80e70aa85b576417fe78bfc8d83c6b3b1e64644a3031d7778fb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:00:05 compute-0 podman[268889]: 2025-11-26 00:00:05.694051893 +0000 UTC m=+0.302355831 container died 49cfb2972c80e70aa85b576417fe78bfc8d83c6b3b1e64644a3031d7778fb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:00:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba053a1cf0d2da896ae26286bab0e3cd0d09d91d89dc50ddc2d863e8f1f9dbcb-merged.mount: Deactivated successfully.
Nov 26 00:00:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:05 compute-0 podman[268889]: 2025-11-26 00:00:05.930374678 +0000 UTC m=+0.538678616 container remove 49cfb2972c80e70aa85b576417fe78bfc8d83c6b3b1e64644a3031d7778fb990 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 00:00:05 compute-0 systemd[1]: libpod-conmon-49cfb2972c80e70aa85b576417fe78bfc8d83c6b3b1e64644a3031d7778fb990.scope: Deactivated successfully.
Nov 26 00:00:06 compute-0 podman[268932]: 2025-11-26 00:00:06.172737487 +0000 UTC m=+0.067952269 container create f70dfa4e2d9cede06455281e228c946f17b0ec4767154f4163885b68a7a77b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:00:06 compute-0 systemd[1]: Started libpod-conmon-f70dfa4e2d9cede06455281e228c946f17b0ec4767154f4163885b68a7a77b6c.scope.
Nov 26 00:00:06 compute-0 podman[268932]: 2025-11-26 00:00:06.140756527 +0000 UTC m=+0.035971209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:00:06 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8b9e02340b44304f1adef25e8cd72ae4a586a3b93da7fdefc004eaa2104cd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8b9e02340b44304f1adef25e8cd72ae4a586a3b93da7fdefc004eaa2104cd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8b9e02340b44304f1adef25e8cd72ae4a586a3b93da7fdefc004eaa2104cd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8b9e02340b44304f1adef25e8cd72ae4a586a3b93da7fdefc004eaa2104cd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:06 compute-0 podman[268932]: 2025-11-26 00:00:06.305452194 +0000 UTC m=+0.200666936 container init f70dfa4e2d9cede06455281e228c946f17b0ec4767154f4163885b68a7a77b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:00:06 compute-0 podman[268932]: 2025-11-26 00:00:06.317579144 +0000 UTC m=+0.212793806 container start f70dfa4e2d9cede06455281e228c946f17b0ec4767154f4163885b68a7a77b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:00:06 compute-0 podman[268932]: 2025-11-26 00:00:06.330433594 +0000 UTC m=+0.225648256 container attach f70dfa4e2d9cede06455281e228c946f17b0ec4767154f4163885b68a7a77b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:00:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v942: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:06 compute-0 nova_compute[252550]: 2025-11-26 00:00:06.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:00:06 compute-0 nova_compute[252550]: 2025-11-26 00:00:06.823 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 00:00:07 compute-0 laughing_franklin[268949]: {
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:     "0": [
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:         {
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "devices": [
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "/dev/loop3"
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             ],
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_name": "ceph_lv0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_size": "21470642176",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "name": "ceph_lv0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "tags": {
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.cluster_name": "ceph",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.crush_device_class": "",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.encrypted": "0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.osd_id": "0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.type": "block",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.vdo": "0"
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             },
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "type": "block",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "vg_name": "ceph_vg0"
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:         }
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:     ],
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:     "1": [
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:         {
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "devices": [
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "/dev/loop4"
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             ],
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_name": "ceph_lv1",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_size": "21470642176",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "name": "ceph_lv1",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "tags": {
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.cluster_name": "ceph",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.crush_device_class": "",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.encrypted": "0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.osd_id": "1",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.type": "block",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.vdo": "0"
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             },
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "type": "block",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "vg_name": "ceph_vg1"
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:         }
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:     ],
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:     "2": [
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:         {
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "devices": [
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "/dev/loop5"
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             ],
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_name": "ceph_lv2",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_size": "21470642176",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "name": "ceph_lv2",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "tags": {
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.cluster_name": "ceph",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.crush_device_class": "",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.encrypted": "0",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.osd_id": "2",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.type": "block",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:                 "ceph.vdo": "0"
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             },
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "type": "block",
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:             "vg_name": "ceph_vg2"
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:         }
Nov 26 00:00:07 compute-0 laughing_franklin[268949]:     ]
Nov 26 00:00:07 compute-0 laughing_franklin[268949]: }
Nov 26 00:00:07 compute-0 systemd[1]: libpod-f70dfa4e2d9cede06455281e228c946f17b0ec4767154f4163885b68a7a77b6c.scope: Deactivated successfully.
Nov 26 00:00:07 compute-0 podman[268932]: 2025-11-26 00:00:07.090145548 +0000 UTC m=+0.985360260 container died f70dfa4e2d9cede06455281e228c946f17b0ec4767154f4163885b68a7a77b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc8b9e02340b44304f1adef25e8cd72ae4a586a3b93da7fdefc004eaa2104cd9-merged.mount: Deactivated successfully.
Nov 26 00:00:07 compute-0 podman[268932]: 2025-11-26 00:00:07.311379673 +0000 UTC m=+1.206594305 container remove f70dfa4e2d9cede06455281e228c946f17b0ec4767154f4163885b68a7a77b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_franklin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 00:00:07 compute-0 systemd[1]: libpod-conmon-f70dfa4e2d9cede06455281e228c946f17b0ec4767154f4163885b68a7a77b6c.scope: Deactivated successfully.
Nov 26 00:00:07 compute-0 sudo[268824]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:07 compute-0 sudo[268970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:00:07 compute-0 sudo[268970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:07 compute-0 sudo[268970]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:07 compute-0 sudo[268995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:00:07 compute-0 sudo[268995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:07 compute-0 sudo[268995]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:07 compute-0 sudo[269020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:00:07 compute-0 sudo[269020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:07 compute-0 sudo[269020]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:07 compute-0 sudo[269045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 26 00:00:07 compute-0 sudo[269045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:07 compute-0 nova_compute[252550]: 2025-11-26 00:00:07.818 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:00:07 compute-0 nova_compute[252550]: 2025-11-26 00:00:07.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:00:07 compute-0 ceph-mon[75654]: pgmap v942: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:08 compute-0 podman[269109]: 2025-11-26 00:00:08.067255503 +0000 UTC m=+0.073540221 container create c9dcba1505a4cd79707204d672401585fa5921aecc3939c0f528c4cd2dff8a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:00:08 compute-0 podman[269109]: 2025-11-26 00:00:08.018905578 +0000 UTC m=+0.025190356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:00:08 compute-0 systemd[1]: Started libpod-conmon-c9dcba1505a4cd79707204d672401585fa5921aecc3939c0f528c4cd2dff8a8d.scope.
Nov 26 00:00:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:00:08 compute-0 podman[269109]: 2025-11-26 00:00:08.205409718 +0000 UTC m=+0.211694496 container init c9dcba1505a4cd79707204d672401585fa5921aecc3939c0f528c4cd2dff8a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:00:08 compute-0 podman[269109]: 2025-11-26 00:00:08.218866514 +0000 UTC m=+0.225151242 container start c9dcba1505a4cd79707204d672401585fa5921aecc3939c0f528c4cd2dff8a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 00:00:08 compute-0 admiring_proskuriakova[269125]: 167 167
Nov 26 00:00:08 compute-0 systemd[1]: libpod-c9dcba1505a4cd79707204d672401585fa5921aecc3939c0f528c4cd2dff8a8d.scope: Deactivated successfully.
Nov 26 00:00:08 compute-0 podman[269109]: 2025-11-26 00:00:08.249542798 +0000 UTC m=+0.255827526 container attach c9dcba1505a4cd79707204d672401585fa5921aecc3939c0f528c4cd2dff8a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 26 00:00:08 compute-0 podman[269109]: 2025-11-26 00:00:08.249944449 +0000 UTC m=+0.256229147 container died c9dcba1505a4cd79707204d672401585fa5921aecc3939c0f528c4cd2dff8a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-42d7a02d646f1f95181e2e5a1b76d7f8425c480d6620f9954b2f8591059b3936-merged.mount: Deactivated successfully.
Nov 26 00:00:08 compute-0 podman[269109]: 2025-11-26 00:00:08.42321714 +0000 UTC m=+0.429501838 container remove c9dcba1505a4cd79707204d672401585fa5921aecc3939c0f528c4cd2dff8a8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 00:00:08 compute-0 systemd[1]: libpod-conmon-c9dcba1505a4cd79707204d672401585fa5921aecc3939c0f528c4cd2dff8a8d.scope: Deactivated successfully.
Nov 26 00:00:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v943: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:08 compute-0 podman[269151]: 2025-11-26 00:00:08.667571124 +0000 UTC m=+0.058153753 container create 0cb5d1ae2d9c380106309c6da51c2c0a8fc380e07daeee0554ccb134143500e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 00:00:08 compute-0 systemd[1]: Started libpod-conmon-0cb5d1ae2d9c380106309c6da51c2c0a8fc380e07daeee0554ccb134143500e3.scope.
Nov 26 00:00:08 compute-0 podman[269151]: 2025-11-26 00:00:08.645096112 +0000 UTC m=+0.035678721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:00:08 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90e96de3be2845cb0094476d7f27faef63b65fb69e54a8980ca9f0e0229bb56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90e96de3be2845cb0094476d7f27faef63b65fb69e54a8980ca9f0e0229bb56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90e96de3be2845cb0094476d7f27faef63b65fb69e54a8980ca9f0e0229bb56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90e96de3be2845cb0094476d7f27faef63b65fb69e54a8980ca9f0e0229bb56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:00:08 compute-0 podman[269151]: 2025-11-26 00:00:08.799916161 +0000 UTC m=+0.190498800 container init 0cb5d1ae2d9c380106309c6da51c2c0a8fc380e07daeee0554ccb134143500e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:00:08 compute-0 podman[269151]: 2025-11-26 00:00:08.806995594 +0000 UTC m=+0.197578203 container start 0cb5d1ae2d9c380106309c6da51c2c0a8fc380e07daeee0554ccb134143500e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 26 00:00:08 compute-0 podman[269151]: 2025-11-26 00:00:08.820634715 +0000 UTC m=+0.211217354 container attach 0cb5d1ae2d9c380106309c6da51c2c0a8fc380e07daeee0554ccb134143500e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 26 00:00:08 compute-0 nova_compute[252550]: 2025-11-26 00:00:08.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:00:08 compute-0 nova_compute[252550]: 2025-11-26 00:00:08.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:00:08 compute-0 nova_compute[252550]: 2025-11-26 00:00:08.847 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:00:08 compute-0 nova_compute[252550]: 2025-11-26 00:00:08.848 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:00:08 compute-0 nova_compute[252550]: 2025-11-26 00:00:08.848 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:00:08 compute-0 nova_compute[252550]: 2025-11-26 00:00:08.848 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 00:00:08 compute-0 nova_compute[252550]: 2025-11-26 00:00:08.849 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:00:09 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:00:09 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935119959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:00:09 compute-0 nova_compute[252550]: 2025-11-26 00:00:09.293 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:00:09 compute-0 nova_compute[252550]: 2025-11-26 00:00:09.531 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 00:00:09 compute-0 nova_compute[252550]: 2025-11-26 00:00:09.534 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5085MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 00:00:09 compute-0 nova_compute[252550]: 2025-11-26 00:00:09.534 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:00:09 compute-0 nova_compute[252550]: 2025-11-26 00:00:09.535 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:00:09 compute-0 nova_compute[252550]: 2025-11-26 00:00:09.655 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 00:00:09 compute-0 nova_compute[252550]: 2025-11-26 00:00:09.656 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 00:00:09 compute-0 nova_compute[252550]: 2025-11-26 00:00:09.674 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:00:09 compute-0 modest_hamilton[269167]: {
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "osd_id": 2,
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "type": "bluestore"
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:     },
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "osd_id": 1,
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "type": "bluestore"
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:     },
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "osd_id": 0,
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:         "type": "bluestore"
Nov 26 00:00:09 compute-0 modest_hamilton[269167]:     }
Nov 26 00:00:09 compute-0 modest_hamilton[269167]: }
Nov 26 00:00:09 compute-0 systemd[1]: libpod-0cb5d1ae2d9c380106309c6da51c2c0a8fc380e07daeee0554ccb134143500e3.scope: Deactivated successfully.
Nov 26 00:00:09 compute-0 podman[269151]: 2025-11-26 00:00:09.909393084 +0000 UTC m=+1.299975723 container died 0cb5d1ae2d9c380106309c6da51c2c0a8fc380e07daeee0554ccb134143500e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 26 00:00:09 compute-0 systemd[1]: libpod-0cb5d1ae2d9c380106309c6da51c2c0a8fc380e07daeee0554ccb134143500e3.scope: Consumed 1.092s CPU time.
Nov 26 00:00:09 compute-0 ceph-mon[75654]: pgmap v943: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:09 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2935119959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:00:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e90e96de3be2845cb0094476d7f27faef63b65fb69e54a8980ca9f0e0229bb56-merged.mount: Deactivated successfully.
Nov 26 00:00:10 compute-0 podman[269151]: 2025-11-26 00:00:10.083338954 +0000 UTC m=+1.473921553 container remove 0cb5d1ae2d9c380106309c6da51c2c0a8fc380e07daeee0554ccb134143500e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:00:10 compute-0 systemd[1]: libpod-conmon-0cb5d1ae2d9c380106309c6da51c2c0a8fc380e07daeee0554ccb134143500e3.scope: Deactivated successfully.
Nov 26 00:00:10 compute-0 sudo[269045]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 00:00:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:00:10 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/166564272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:00:10 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:00:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 00:00:10 compute-0 nova_compute[252550]: 2025-11-26 00:00:10.162 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:00:10 compute-0 nova_compute[252550]: 2025-11-26 00:00:10.170 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 00:00:10 compute-0 nova_compute[252550]: 2025-11-26 00:00:10.187 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 00:00:10 compute-0 nova_compute[252550]: 2025-11-26 00:00:10.190 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 00:00:10 compute-0 nova_compute[252550]: 2025-11-26 00:00:10.191 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:00:10 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:00:10 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 28a6ae4f-9139-4bba-a155-2f86e1e16048 does not exist
Nov 26 00:00:10 compute-0 sudo[269258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:00:10 compute-0 sudo[269258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:10 compute-0 sudo[269258]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:10 compute-0 sudo[269283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 00:00:10 compute-0 sudo[269283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:00:10 compute-0 sudo[269283]: pam_unix(sudo:session): session closed for user root
Nov 26 00:00:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v944: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:10 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/166564272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:00:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:00:10 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:00:12 compute-0 ceph-mon[75654]: pgmap v944: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v945: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:13 compute-0 nova_compute[252550]: 2025-11-26 00:00:13.191 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:00:13 compute-0 nova_compute[252550]: 2025-11-26 00:00:13.192 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 00:00:13 compute-0 nova_compute[252550]: 2025-11-26 00:00:13.192 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 00:00:13 compute-0 nova_compute[252550]: 2025-11-26 00:00:13.218 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 00:00:13 compute-0 nova_compute[252550]: 2025-11-26 00:00:13.219 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:00:13 compute-0 nova_compute[252550]: 2025-11-26 00:00:13.220 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:00:13 compute-0 ceph-mon[75654]: pgmap v945: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v946: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:14 compute-0 nova_compute[252550]: 2025-11-26 00:00:14.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:00:15 compute-0 ceph-mon[75654]: pgmap v946: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v947: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 00:00:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2955567111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:00:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 00:00:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2955567111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:00:17 compute-0 ceph-mon[75654]: pgmap v947: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2955567111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:00:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/2955567111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:00:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v948: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:20 compute-0 ceph-mon[75654]: pgmap v948: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v949: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:20 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:22 compute-0 ceph-mon[75654]: pgmap v949: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v950: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:23 compute-0 podman[269308]: 2025-11-26 00:00:23.282582908 +0000 UTC m=+0.102765096 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 26 00:00:23 compute-0 ceph-mon[75654]: pgmap v950: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v951: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:25 compute-0 ceph-mon[75654]: pgmap v951: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:00:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:00:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:00:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:00:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:00:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:00:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v952: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:27 compute-0 ceph-mon[75654]: pgmap v952: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v953: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:30 compute-0 ceph-mon[75654]: pgmap v953: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v954: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:32 compute-0 podman[269330]: 2025-11-26 00:00:32.25944227 +0000 UTC m=+0.081659711 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 00:00:32 compute-0 podman[269329]: 2025-11-26 00:00:32.282297561 +0000 UTC m=+0.111554563 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 00:00:32 compute-0 ceph-mon[75654]: pgmap v954: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v955: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:34 compute-0 ceph-mon[75654]: pgmap v955: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v956: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:35 compute-0 ceph-mon[75654]: pgmap v956: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v957: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:37 compute-0 ceph-mon[75654]: pgmap v957: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v958: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:40 compute-0 ceph-mon[75654]: pgmap v958: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v959: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:00:40.768 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:00:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:00:40.769 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:00:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:00:40.769 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:00:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:42 compute-0 ceph-mon[75654]: pgmap v959: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v960: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:44 compute-0 ceph-mon[75654]: pgmap v960: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v961: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:46 compute-0 ceph-mon[75654]: pgmap v961: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v962: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:48 compute-0 ceph-mon[75654]: pgmap v962: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v963: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:50 compute-0 ceph-mon[75654]: pgmap v963: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v964: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:52 compute-0 ceph-mon[75654]: pgmap v964: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v965: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:54 compute-0 ceph-mon[75654]: pgmap v965: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:54 compute-0 podman[269373]: 2025-11-26 00:00:54.249102386 +0000 UTC m=+0.074125166 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 00:00:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v966: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:55 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-26_00:00:56
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes', 'backups']
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:00:56 compute-0 ceph-mon[75654]: pgmap v966: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:00:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v967: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:58 compute-0 ceph-mon[75654]: pgmap v967: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:00:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v968: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:00 compute-0 ceph-mon[75654]: pgmap v968: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v969: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:00 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:01:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:01:01 compute-0 CROND[269394]: (root) CMD (run-parts /etc/cron.hourly)
Nov 26 00:01:01 compute-0 run-parts[269397]: (/etc/cron.hourly) starting 0anacron
Nov 26 00:01:01 compute-0 anacron[269405]: Anacron started on 2025-11-26
Nov 26 00:01:01 compute-0 anacron[269405]: Job `cron.monthly' locked by another anacron - skipping
Nov 26 00:01:01 compute-0 anacron[269405]: Normal exit (0 jobs run)
Nov 26 00:01:01 compute-0 run-parts[269407]: (/etc/cron.hourly) finished 0anacron
Nov 26 00:01:01 compute-0 CROND[269393]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 26 00:01:02 compute-0 ceph-mon[75654]: pgmap v969: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v970: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:02 compute-0 nova_compute[252550]: 2025-11-26 00:01:02.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:02 compute-0 nova_compute[252550]: 2025-11-26 00:01:02.823 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 00:01:02 compute-0 nova_compute[252550]: 2025-11-26 00:01:02.842 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 00:01:03 compute-0 podman[269409]: 2025-11-26 00:01:03.277945422 +0000 UTC m=+0.095519318 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 00:01:03 compute-0 podman[269408]: 2025-11-26 00:01:03.320875229 +0000 UTC m=+0.133212253 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 26 00:01:04 compute-0 ceph-mon[75654]: pgmap v970: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v971: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:05 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:06 compute-0 ceph-mon[75654]: pgmap v971: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v972: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:07 compute-0 nova_compute[252550]: 2025-11-26 00:01:07.841 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:08 compute-0 ceph-mon[75654]: pgmap v972: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v973: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:08 compute-0 nova_compute[252550]: 2025-11-26 00:01:08.818 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:09 compute-0 nova_compute[252550]: 2025-11-26 00:01:09.483 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:09 compute-0 nova_compute[252550]: 2025-11-26 00:01:09.484 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:09 compute-0 nova_compute[252550]: 2025-11-26 00:01:09.484 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 00:01:09 compute-0 nova_compute[252550]: 2025-11-26 00:01:09.484 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:09 compute-0 nova_compute[252550]: 2025-11-26 00:01:09.564 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:01:09 compute-0 nova_compute[252550]: 2025-11-26 00:01:09.565 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:01:09 compute-0 nova_compute[252550]: 2025-11-26 00:01:09.566 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:01:09 compute-0 nova_compute[252550]: 2025-11-26 00:01:09.566 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 00:01:09 compute-0 nova_compute[252550]: 2025-11-26 00:01:09.566 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:01:10 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:01:10 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1441685280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.034 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:01:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:01:10 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4529 writes, 19K keys, 4529 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4529 writes, 4529 syncs, 1.00 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1483 writes, 7004 keys, 1483 commit groups, 1.0 writes per commit group, ingest: 6.37 MB, 0.01 MB/s
                                           Interval WAL: 1483 writes, 1483 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     94.3      0.17              0.08        11    0.015       0      0       0.0       0.0
                                             L6      1/0    5.20 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   3.1    101.1     83.3      0.60              0.22        10    0.060     38K   5309       0.0       0.0
                                            Sum      1/0    5.20 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   4.1     78.9     85.7      0.77              0.29        21    0.037     38K   5309       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4     60.9     62.1      0.51              0.15        10    0.051     21K   3046       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    101.1     83.3      0.60              0.22        10    0.060     38K   5309       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     95.6      0.17              0.08        10    0.017       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.8      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.04 MB/s write, 0.06 GB read, 0.03 MB/s read, 0.8 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f0edcc31f0#2 capacity: 308.00 MB usage: 5.35 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(539,5.02 MB,1.63142%) FilterBlock(22,112.17 KB,0.0355659%) IndexBlock(22,217.30 KB,0.0688974%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.163 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.164 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5174MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.164 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.164 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:01:10 compute-0 ceph-mon[75654]: pgmap v973: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:10 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1441685280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:01:10 compute-0 sudo[269473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:01:10 compute-0 sudo[269473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:10 compute-0 sudo[269473]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:10 compute-0 sudo[269498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:01:10 compute-0 sudo[269498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:10 compute-0 sudo[269498]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:10 compute-0 sudo[269523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:01:10 compute-0 sudo[269523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:10 compute-0 sudo[269523]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v974: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:10 compute-0 sudo[269548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 00:01:10 compute-0 sudo[269548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.732 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.733 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.813 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing inventories for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.909 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Updating ProviderTree inventory for provider 08547965-b35f-4b7b-95d8-902f06aa011c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.909 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Updating inventory in ProviderTree for provider 08547965-b35f-4b7b-95d8-902f06aa011c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.933 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing aggregate associations for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 00:01:10 compute-0 nova_compute[252550]: 2025-11-26 00:01:10.953 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing trait associations for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 00:01:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:11 compute-0 nova_compute[252550]: 2025-11-26 00:01:11.006 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:01:11 compute-0 sudo[269548]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:01:11 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:01:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 00:01:11 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:01:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 00:01:11 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:01:11 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev e5516550-e9f7-4be8-910c-b6519c6b275e does not exist
Nov 26 00:01:11 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 26a4f011-47f3-4552-bc25-1efcb224aeee does not exist
Nov 26 00:01:11 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev f545da60-f0d2-489c-8e84-ff20beae4aec does not exist
Nov 26 00:01:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 00:01:11 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:01:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 00:01:11 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:01:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:01:11 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:01:11 compute-0 sudo[269624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:01:11 compute-0 sudo[269624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:11 compute-0 sudo[269624]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:11 compute-0 sudo[269649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:01:11 compute-0 sudo[269649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:11 compute-0 sudo[269649]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:11 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:01:11 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:01:11 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:01:11 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:01:11 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:01:11 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:01:11 compute-0 sudo[269674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:01:11 compute-0 sudo[269674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:11 compute-0 sudo[269674]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:11 compute-0 sudo[269699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 00:01:11 compute-0 sudo[269699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:01:11 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476412978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:01:11 compute-0 nova_compute[252550]: 2025-11-26 00:01:11.433 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:01:11 compute-0 nova_compute[252550]: 2025-11-26 00:01:11.438 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 00:01:11 compute-0 nova_compute[252550]: 2025-11-26 00:01:11.455 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 00:01:11 compute-0 nova_compute[252550]: 2025-11-26 00:01:11.457 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 00:01:11 compute-0 nova_compute[252550]: 2025-11-26 00:01:11.457 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:01:11 compute-0 nova_compute[252550]: 2025-11-26 00:01:11.458 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:11 compute-0 podman[269764]: 2025-11-26 00:01:11.695688472 +0000 UTC m=+0.034652173 container create 4caf23658b10fe8eb91278cad6dd68851d78f08f2e942e4031595b2e40d08ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:01:11 compute-0 systemd[1]: Started libpod-conmon-4caf23658b10fe8eb91278cad6dd68851d78f08f2e942e4031595b2e40d08ee5.scope.
Nov 26 00:01:11 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:01:11 compute-0 podman[269764]: 2025-11-26 00:01:11.775482312 +0000 UTC m=+0.114446013 container init 4caf23658b10fe8eb91278cad6dd68851d78f08f2e942e4031595b2e40d08ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:01:11 compute-0 podman[269764]: 2025-11-26 00:01:11.68125737 +0000 UTC m=+0.020221091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:01:11 compute-0 podman[269764]: 2025-11-26 00:01:11.781744061 +0000 UTC m=+0.120707762 container start 4caf23658b10fe8eb91278cad6dd68851d78f08f2e942e4031595b2e40d08ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:01:11 compute-0 podman[269764]: 2025-11-26 00:01:11.785047832 +0000 UTC m=+0.124011533 container attach 4caf23658b10fe8eb91278cad6dd68851d78f08f2e942e4031595b2e40d08ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:01:11 compute-0 exciting_grothendieck[269780]: 167 167
Nov 26 00:01:11 compute-0 systemd[1]: libpod-4caf23658b10fe8eb91278cad6dd68851d78f08f2e942e4031595b2e40d08ee5.scope: Deactivated successfully.
Nov 26 00:01:11 compute-0 podman[269764]: 2025-11-26 00:01:11.786498201 +0000 UTC m=+0.125461902 container died 4caf23658b10fe8eb91278cad6dd68851d78f08f2e942e4031595b2e40d08ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc4640ffcf6ee9ca8ee5b0accc2f8e3c57e9c67b9dc24da9c715beae9ccd693d-merged.mount: Deactivated successfully.
Nov 26 00:01:11 compute-0 podman[269764]: 2025-11-26 00:01:11.82689624 +0000 UTC m=+0.165859941 container remove 4caf23658b10fe8eb91278cad6dd68851d78f08f2e942e4031595b2e40d08ee5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_grothendieck, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:01:11 compute-0 systemd[1]: libpod-conmon-4caf23658b10fe8eb91278cad6dd68851d78f08f2e942e4031595b2e40d08ee5.scope: Deactivated successfully.
Nov 26 00:01:11 compute-0 podman[269804]: 2025-11-26 00:01:11.977977707 +0000 UTC m=+0.039401723 container create f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 26 00:01:12 compute-0 systemd[1]: Started libpod-conmon-f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b.scope.
Nov 26 00:01:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ee16bba8890f008f7ee17c31e30eeb23e1cb733aed405a2ec2b98f1a232dd08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ee16bba8890f008f7ee17c31e30eeb23e1cb733aed405a2ec2b98f1a232dd08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ee16bba8890f008f7ee17c31e30eeb23e1cb733aed405a2ec2b98f1a232dd08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ee16bba8890f008f7ee17c31e30eeb23e1cb733aed405a2ec2b98f1a232dd08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ee16bba8890f008f7ee17c31e30eeb23e1cb733aed405a2ec2b98f1a232dd08/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:12 compute-0 podman[269804]: 2025-11-26 00:01:11.959803503 +0000 UTC m=+0.021227549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:01:12 compute-0 podman[269804]: 2025-11-26 00:01:12.066443812 +0000 UTC m=+0.127867838 container init f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goodall, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:01:12 compute-0 podman[269804]: 2025-11-26 00:01:12.072578818 +0000 UTC m=+0.134002824 container start f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:01:12 compute-0 podman[269804]: 2025-11-26 00:01:12.081053609 +0000 UTC m=+0.142477615 container attach f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goodall, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 00:01:12 compute-0 ceph-mon[75654]: pgmap v974: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:12 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3476412978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:01:12 compute-0 nova_compute[252550]: 2025-11-26 00:01:12.467 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:12 compute-0 nova_compute[252550]: 2025-11-26 00:01:12.469 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:12 compute-0 nova_compute[252550]: 2025-11-26 00:01:12.469 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 00:01:12 compute-0 nova_compute[252550]: 2025-11-26 00:01:12.469 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 00:01:12 compute-0 nova_compute[252550]: 2025-11-26 00:01:12.487 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 00:01:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v975: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:12 compute-0 nova_compute[252550]: 2025-11-26 00:01:12.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:13 compute-0 eloquent_goodall[269820]: --> passed data devices: 0 physical, 3 LVM
Nov 26 00:01:13 compute-0 eloquent_goodall[269820]: --> relative data size: 1.0
Nov 26 00:01:13 compute-0 eloquent_goodall[269820]: --> All data devices are unavailable
Nov 26 00:01:13 compute-0 systemd[1]: libpod-f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b.scope: Deactivated successfully.
Nov 26 00:01:13 compute-0 systemd[1]: libpod-f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b.scope: Consumed 1.021s CPU time.
Nov 26 00:01:13 compute-0 conmon[269820]: conmon f7dda24ad26317edbb6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b.scope/container/memory.events
Nov 26 00:01:13 compute-0 podman[269804]: 2025-11-26 00:01:13.167934267 +0000 UTC m=+1.229358283 container died f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ee16bba8890f008f7ee17c31e30eeb23e1cb733aed405a2ec2b98f1a232dd08-merged.mount: Deactivated successfully.
Nov 26 00:01:13 compute-0 podman[269804]: 2025-11-26 00:01:13.229242284 +0000 UTC m=+1.290666300 container remove f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 26 00:01:13 compute-0 systemd[1]: libpod-conmon-f7dda24ad26317edbb6c1d7a5f4f7f2d8fa68b046a5c224abf8714a00454ed1b.scope: Deactivated successfully.
Nov 26 00:01:13 compute-0 sudo[269699]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:13 compute-0 sudo[269863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:01:13 compute-0 sudo[269863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:13 compute-0 sudo[269863]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:13 compute-0 sudo[269888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:01:13 compute-0 sudo[269888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:13 compute-0 sudo[269888]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:13 compute-0 sudo[269913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:01:13 compute-0 sudo[269913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:13 compute-0 sudo[269913]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:13 compute-0 sudo[269938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 26 00:01:13 compute-0 sudo[269938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:13 compute-0 podman[270003]: 2025-11-26 00:01:13.963339633 +0000 UTC m=+0.055083810 container create f1e63b12eebb1a3d7f7b0e954c1173aec7d5c0aacf98c70609a6ecc62947dc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 26 00:01:14 compute-0 systemd[1]: Started libpod-conmon-f1e63b12eebb1a3d7f7b0e954c1173aec7d5c0aacf98c70609a6ecc62947dc7a.scope.
Nov 26 00:01:14 compute-0 podman[270003]: 2025-11-26 00:01:13.936546473 +0000 UTC m=+0.028290710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:01:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:01:14 compute-0 podman[270003]: 2025-11-26 00:01:14.051825068 +0000 UTC m=+0.143569295 container init f1e63b12eebb1a3d7f7b0e954c1173aec7d5c0aacf98c70609a6ecc62947dc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 26 00:01:14 compute-0 podman[270003]: 2025-11-26 00:01:14.065401867 +0000 UTC m=+0.157146014 container start f1e63b12eebb1a3d7f7b0e954c1173aec7d5c0aacf98c70609a6ecc62947dc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:01:14 compute-0 podman[270003]: 2025-11-26 00:01:14.069684574 +0000 UTC m=+0.161428721 container attach f1e63b12eebb1a3d7f7b0e954c1173aec7d5c0aacf98c70609a6ecc62947dc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 26 00:01:14 compute-0 heuristic_bassi[270019]: 167 167
Nov 26 00:01:14 compute-0 systemd[1]: libpod-f1e63b12eebb1a3d7f7b0e954c1173aec7d5c0aacf98c70609a6ecc62947dc7a.scope: Deactivated successfully.
Nov 26 00:01:14 compute-0 conmon[270019]: conmon f1e63b12eebb1a3d7f7b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1e63b12eebb1a3d7f7b0e954c1173aec7d5c0aacf98c70609a6ecc62947dc7a.scope/container/memory.events
Nov 26 00:01:14 compute-0 podman[270003]: 2025-11-26 00:01:14.075001908 +0000 UTC m=+0.166746085 container died f1e63b12eebb1a3d7f7b0e954c1173aec7d5c0aacf98c70609a6ecc62947dc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4d421b6c9b0a057094eb7207140c5519f62959ebd57283c722eaee1daf28f92-merged.mount: Deactivated successfully.
Nov 26 00:01:14 compute-0 podman[270003]: 2025-11-26 00:01:14.122769086 +0000 UTC m=+0.214513243 container remove f1e63b12eebb1a3d7f7b0e954c1173aec7d5c0aacf98c70609a6ecc62947dc7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bassi, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:01:14 compute-0 systemd[1]: libpod-conmon-f1e63b12eebb1a3d7f7b0e954c1173aec7d5c0aacf98c70609a6ecc62947dc7a.scope: Deactivated successfully.
Nov 26 00:01:14 compute-0 podman[270042]: 2025-11-26 00:01:14.318461007 +0000 UTC m=+0.041469618 container create 0bba892621aadd765d7affa15a5dec4f295dca82ca3598bfe0502d36fa40d6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 00:01:14 compute-0 ceph-mon[75654]: pgmap v975: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:14 compute-0 systemd[1]: Started libpod-conmon-0bba892621aadd765d7affa15a5dec4f295dca82ca3598bfe0502d36fa40d6bf.scope.
Nov 26 00:01:14 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91af29b734f6a03eec00a346235b91c2657133b51e2775d5455162b4a587ca4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91af29b734f6a03eec00a346235b91c2657133b51e2775d5455162b4a587ca4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91af29b734f6a03eec00a346235b91c2657133b51e2775d5455162b4a587ca4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91af29b734f6a03eec00a346235b91c2657133b51e2775d5455162b4a587ca4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:14 compute-0 podman[270042]: 2025-11-26 00:01:14.395489921 +0000 UTC m=+0.118498552 container init 0bba892621aadd765d7affa15a5dec4f295dca82ca3598bfe0502d36fa40d6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:01:14 compute-0 podman[270042]: 2025-11-26 00:01:14.302677168 +0000 UTC m=+0.025685809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:01:14 compute-0 podman[270042]: 2025-11-26 00:01:14.402624215 +0000 UTC m=+0.125632826 container start 0bba892621aadd765d7affa15a5dec4f295dca82ca3598bfe0502d36fa40d6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:01:14 compute-0 podman[270042]: 2025-11-26 00:01:14.405665227 +0000 UTC m=+0.128673848 container attach 0bba892621aadd765d7affa15a5dec4f295dca82ca3598bfe0502d36fa40d6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 26 00:01:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v976: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:14 compute-0 nova_compute[252550]: 2025-11-26 00:01:14.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:14 compute-0 nova_compute[252550]: 2025-11-26 00:01:14.824 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:14 compute-0 nova_compute[252550]: 2025-11-26 00:01:14.824 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:14 compute-0 nova_compute[252550]: 2025-11-26 00:01:14.824 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 00:01:15 compute-0 focused_sammet[270058]: {
Nov 26 00:01:15 compute-0 focused_sammet[270058]:     "0": [
Nov 26 00:01:15 compute-0 focused_sammet[270058]:         {
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "devices": [
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "/dev/loop3"
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             ],
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_name": "ceph_lv0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_size": "21470642176",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "name": "ceph_lv0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "tags": {
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.cluster_name": "ceph",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.crush_device_class": "",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.encrypted": "0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.osd_id": "0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.type": "block",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.vdo": "0"
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             },
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "type": "block",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "vg_name": "ceph_vg0"
Nov 26 00:01:15 compute-0 focused_sammet[270058]:         }
Nov 26 00:01:15 compute-0 focused_sammet[270058]:     ],
Nov 26 00:01:15 compute-0 focused_sammet[270058]:     "1": [
Nov 26 00:01:15 compute-0 focused_sammet[270058]:         {
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "devices": [
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "/dev/loop4"
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             ],
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_name": "ceph_lv1",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_size": "21470642176",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "name": "ceph_lv1",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "tags": {
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.cluster_name": "ceph",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.crush_device_class": "",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.encrypted": "0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.osd_id": "1",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.type": "block",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.vdo": "0"
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             },
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "type": "block",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "vg_name": "ceph_vg1"
Nov 26 00:01:15 compute-0 focused_sammet[270058]:         }
Nov 26 00:01:15 compute-0 focused_sammet[270058]:     ],
Nov 26 00:01:15 compute-0 focused_sammet[270058]:     "2": [
Nov 26 00:01:15 compute-0 focused_sammet[270058]:         {
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "devices": [
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "/dev/loop5"
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             ],
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_name": "ceph_lv2",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_size": "21470642176",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "name": "ceph_lv2",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "tags": {
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.cluster_name": "ceph",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.crush_device_class": "",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.encrypted": "0",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.osd_id": "2",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.type": "block",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:                 "ceph.vdo": "0"
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             },
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "type": "block",
Nov 26 00:01:15 compute-0 focused_sammet[270058]:             "vg_name": "ceph_vg2"
Nov 26 00:01:15 compute-0 focused_sammet[270058]:         }
Nov 26 00:01:15 compute-0 focused_sammet[270058]:     ]
Nov 26 00:01:15 compute-0 focused_sammet[270058]: }
Nov 26 00:01:15 compute-0 systemd[1]: libpod-0bba892621aadd765d7affa15a5dec4f295dca82ca3598bfe0502d36fa40d6bf.scope: Deactivated successfully.
Nov 26 00:01:15 compute-0 podman[270042]: 2025-11-26 00:01:15.138202783 +0000 UTC m=+0.861211404 container died 0bba892621aadd765d7affa15a5dec4f295dca82ca3598bfe0502d36fa40d6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 26 00:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c91af29b734f6a03eec00a346235b91c2657133b51e2775d5455162b4a587ca4-merged.mount: Deactivated successfully.
Nov 26 00:01:15 compute-0 podman[270042]: 2025-11-26 00:01:15.215555446 +0000 UTC m=+0.938564067 container remove 0bba892621aadd765d7affa15a5dec4f295dca82ca3598bfe0502d36fa40d6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_sammet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:01:15 compute-0 systemd[1]: libpod-conmon-0bba892621aadd765d7affa15a5dec4f295dca82ca3598bfe0502d36fa40d6bf.scope: Deactivated successfully.
Nov 26 00:01:15 compute-0 sudo[269938]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:15 compute-0 sudo[270082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:01:15 compute-0 sudo[270082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:15 compute-0 sudo[270082]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:15 compute-0 sudo[270107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:01:15 compute-0 sudo[270107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:15 compute-0 sudo[270107]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:15 compute-0 sudo[270132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:01:15 compute-0 sudo[270132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:15 compute-0 sudo[270132]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:15 compute-0 sudo[270157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 26 00:01:15 compute-0 sudo[270157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:15 compute-0 podman[270221]: 2025-11-26 00:01:15.806955564 +0000 UTC m=+0.043717409 container create a8d219822cf72d23399415131fb2326f6fdd3e956f480a1bc2a1744eb2a036cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 00:01:15 compute-0 systemd[1]: Started libpod-conmon-a8d219822cf72d23399415131fb2326f6fdd3e956f480a1bc2a1744eb2a036cd.scope.
Nov 26 00:01:15 compute-0 podman[270221]: 2025-11-26 00:01:15.786772216 +0000 UTC m=+0.023534051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:01:15 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:01:15 compute-0 podman[270221]: 2025-11-26 00:01:15.899710946 +0000 UTC m=+0.136472811 container init a8d219822cf72d23399415131fb2326f6fdd3e956f480a1bc2a1744eb2a036cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:01:15 compute-0 podman[270221]: 2025-11-26 00:01:15.910595692 +0000 UTC m=+0.147357537 container start a8d219822cf72d23399415131fb2326f6fdd3e956f480a1bc2a1744eb2a036cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 00:01:15 compute-0 podman[270221]: 2025-11-26 00:01:15.914532359 +0000 UTC m=+0.151294194 container attach a8d219822cf72d23399415131fb2326f6fdd3e956f480a1bc2a1744eb2a036cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 00:01:15 compute-0 cranky_faraday[270238]: 167 167
Nov 26 00:01:15 compute-0 systemd[1]: libpod-a8d219822cf72d23399415131fb2326f6fdd3e956f480a1bc2a1744eb2a036cd.scope: Deactivated successfully.
Nov 26 00:01:15 compute-0 podman[270221]: 2025-11-26 00:01:15.918819515 +0000 UTC m=+0.155581320 container died a8d219822cf72d23399415131fb2326f6fdd3e956f480a1bc2a1744eb2a036cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 00:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc6ee7cc456f855df071c32baa746a665f4813950de86a94f85a461bec1b2342-merged.mount: Deactivated successfully.
Nov 26 00:01:15 compute-0 podman[270221]: 2025-11-26 00:01:15.964934159 +0000 UTC m=+0.201695984 container remove a8d219822cf72d23399415131fb2326f6fdd3e956f480a1bc2a1744eb2a036cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_faraday, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 00:01:15 compute-0 systemd[1]: libpod-conmon-a8d219822cf72d23399415131fb2326f6fdd3e956f480a1bc2a1744eb2a036cd.scope: Deactivated successfully.
Nov 26 00:01:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:16 compute-0 podman[270261]: 2025-11-26 00:01:16.163778105 +0000 UTC m=+0.070997921 container create df583771cbd667fd82cd4ad1a45ea0f0cc57f9aecf9313bd2d7b4d76d08ba50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:01:16 compute-0 systemd[1]: Started libpod-conmon-df583771cbd667fd82cd4ad1a45ea0f0cc57f9aecf9313bd2d7b4d76d08ba50e.scope.
Nov 26 00:01:16 compute-0 podman[270261]: 2025-11-26 00:01:16.135290611 +0000 UTC m=+0.042510527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:01:16 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:01:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19628467f24cbaa743312079b9f7961a42659c6c366db2295ee595939fe5c753/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19628467f24cbaa743312079b9f7961a42659c6c366db2295ee595939fe5c753/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19628467f24cbaa743312079b9f7961a42659c6c366db2295ee595939fe5c753/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19628467f24cbaa743312079b9f7961a42659c6c366db2295ee595939fe5c753/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:01:16 compute-0 podman[270261]: 2025-11-26 00:01:16.276765697 +0000 UTC m=+0.183985613 container init df583771cbd667fd82cd4ad1a45ea0f0cc57f9aecf9313bd2d7b4d76d08ba50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dirac, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 26 00:01:16 compute-0 podman[270261]: 2025-11-26 00:01:16.294346984 +0000 UTC m=+0.201566830 container start df583771cbd667fd82cd4ad1a45ea0f0cc57f9aecf9313bd2d7b4d76d08ba50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 00:01:16 compute-0 podman[270261]: 2025-11-26 00:01:16.298621831 +0000 UTC m=+0.205841727 container attach df583771cbd667fd82cd4ad1a45ea0f0cc57f9aecf9313bd2d7b4d76d08ba50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dirac, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 26 00:01:16 compute-0 ceph-mon[75654]: pgmap v976: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v977: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:17 compute-0 gallant_dirac[270277]: {
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "osd_id": 2,
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "type": "bluestore"
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:     },
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "osd_id": 1,
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "type": "bluestore"
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:     },
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "osd_id": 0,
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:         "type": "bluestore"
Nov 26 00:01:17 compute-0 gallant_dirac[270277]:     }
Nov 26 00:01:17 compute-0 gallant_dirac[270277]: }
Nov 26 00:01:17 compute-0 systemd[1]: libpod-df583771cbd667fd82cd4ad1a45ea0f0cc57f9aecf9313bd2d7b4d76d08ba50e.scope: Deactivated successfully.
Nov 26 00:01:17 compute-0 podman[270261]: 2025-11-26 00:01:17.436502066 +0000 UTC m=+1.343721912 container died df583771cbd667fd82cd4ad1a45ea0f0cc57f9aecf9313bd2d7b4d76d08ba50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:01:17 compute-0 systemd[1]: libpod-df583771cbd667fd82cd4ad1a45ea0f0cc57f9aecf9313bd2d7b4d76d08ba50e.scope: Consumed 1.147s CPU time.
Nov 26 00:01:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-19628467f24cbaa743312079b9f7961a42659c6c366db2295ee595939fe5c753-merged.mount: Deactivated successfully.
Nov 26 00:01:17 compute-0 podman[270261]: 2025-11-26 00:01:17.528164389 +0000 UTC m=+1.435384235 container remove df583771cbd667fd82cd4ad1a45ea0f0cc57f9aecf9313bd2d7b4d76d08ba50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dirac, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:01:17 compute-0 systemd[1]: libpod-conmon-df583771cbd667fd82cd4ad1a45ea0f0cc57f9aecf9313bd2d7b4d76d08ba50e.scope: Deactivated successfully.
Nov 26 00:01:17 compute-0 sudo[270157]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 00:01:17 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:01:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 00:01:17 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:01:17 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 664715d1-979d-4f49-969e-6e6c299942e2 does not exist
Nov 26 00:01:17 compute-0 sudo[270324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:01:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 00:01:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3523454714' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:01:17 compute-0 sudo[270324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 00:01:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3523454714' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:01:17 compute-0 sudo[270324]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:17 compute-0 sudo[270349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 00:01:17 compute-0 sudo[270349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:01:17 compute-0 sudo[270349]: pam_unix(sudo:session): session closed for user root
Nov 26 00:01:18 compute-0 ceph-mon[75654]: pgmap v977: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:01:18 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:01:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3523454714' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:01:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3523454714' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:01:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v978: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:19 compute-0 sshd-session[269371]: Received disconnect from 45.78.216.132 port 58760:11: Bye Bye [preauth]
Nov 26 00:01:19 compute-0 sshd-session[269371]: Disconnected from authenticating user root 45.78.216.132 port 58760 [preauth]
Nov 26 00:01:20 compute-0 ceph-mon[75654]: pgmap v978: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v979: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:22 compute-0 ceph-mon[75654]: pgmap v979: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v980: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:24 compute-0 ceph-mon[75654]: pgmap v980: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v981: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:25 compute-0 podman[270374]: 2025-11-26 00:01:25.238079656 +0000 UTC m=+0.064497843 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 26 00:01:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:01:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:01:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:01:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:01:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:01:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:01:26 compute-0 nova_compute[252550]: 2025-11-26 00:01:26.168 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:01:26 compute-0 ceph-mon[75654]: pgmap v981: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v982: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:27 compute-0 ceph-mon[75654]: pgmap v982: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v983: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:29 compute-0 ceph-mon[75654]: pgmap v983: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v984: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:31 compute-0 ceph-mon[75654]: pgmap v984: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v985: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:33 compute-0 ceph-mon[75654]: pgmap v985: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:34 compute-0 podman[270395]: 2025-11-26 00:01:34.243109006 +0000 UTC m=+0.078127706 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 26 00:01:34 compute-0 podman[270396]: 2025-11-26 00:01:34.26793432 +0000 UTC m=+0.084969201 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 26 00:01:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v986: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:35 compute-0 ceph-mon[75654]: pgmap v986: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v987: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:37 compute-0 ceph-mon[75654]: pgmap v987: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v988: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:39 compute-0 ceph-mon[75654]: pgmap v988: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v989: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:01:40.769 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:01:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:01:40.770 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:01:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:01:40.770 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:01:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:41 compute-0 ceph-mon[75654]: pgmap v989: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v990: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:43 compute-0 ceph-mon[75654]: pgmap v990: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.818885) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115303818926, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1467, "num_deletes": 251, "total_data_size": 1549501, "memory_usage": 1579312, "flush_reason": "Manual Compaction"}
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115303835161, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1517613, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19177, "largest_seqno": 20643, "table_properties": {"data_size": 1510820, "index_size": 3932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14102, "raw_average_key_size": 19, "raw_value_size": 1497121, "raw_average_value_size": 2102, "num_data_blocks": 181, "num_entries": 712, "num_filter_entries": 712, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764115146, "oldest_key_time": 1764115146, "file_creation_time": 1764115303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 16390 microseconds, and 8685 cpu microseconds.
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.835273) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1517613 bytes OK
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.835296) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.837830) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.837854) EVENT_LOG_v1 {"time_micros": 1764115303837847, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.837875) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1543061, prev total WAL file size 1543061, number of live WAL files 2.
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.838881) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1482KB)], [47(5321KB)]
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115303838934, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 6966687, "oldest_snapshot_seqno": -1}
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4033 keys, 5780313 bytes, temperature: kUnknown
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115303888664, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 5780313, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5751116, "index_size": 18015, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 97355, "raw_average_key_size": 24, "raw_value_size": 5676385, "raw_average_value_size": 1407, "num_data_blocks": 765, "num_entries": 4033, "num_filter_entries": 4033, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764115303, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.889072) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 5780313 bytes
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.890994) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.7 rd, 115.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 5.2 +0.0 blob) out(5.5 +0.0 blob), read-write-amplify(8.4) write-amplify(3.8) OK, records in: 4547, records dropped: 514 output_compression: NoCompression
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.891024) EVENT_LOG_v1 {"time_micros": 1764115303891010, "job": 24, "event": "compaction_finished", "compaction_time_micros": 49877, "compaction_time_cpu_micros": 27024, "output_level": 6, "num_output_files": 1, "total_output_size": 5780313, "num_input_records": 4547, "num_output_records": 4033, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115303891799, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115303893691, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.838755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.893818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.893825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.893828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.893831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:01:43 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:01:43.893834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:01:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v991: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:45 compute-0 ceph-mon[75654]: pgmap v991: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v992: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:47 compute-0 ceph-mon[75654]: pgmap v992: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v993: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:49 compute-0 ceph-mon[75654]: pgmap v993: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v994: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:51 compute-0 ceph-mon[75654]: pgmap v994: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v995: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:53 compute-0 ceph-mon[75654]: pgmap v995: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v996: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:55 compute-0 ceph-mon[75654]: pgmap v996: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-26_00:01:56
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['vms', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta']
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:01:56 compute-0 podman[270441]: 2025-11-26 00:01:56.263592668 +0000 UTC m=+0.084196883 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v997: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:56 compute-0 ceph-mgr[75954]: client.0 ms_handle_reset on v2:192.168.122.100:6800/855624559
Nov 26 00:01:57 compute-0 ceph-mon[75654]: pgmap v997: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v998: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:01:59 compute-0 ceph-mon[75654]: pgmap v998: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v999: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:02:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:02:01 compute-0 ceph-mon[75654]: pgmap v999: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1000: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:03 compute-0 ceph-mon[75654]: pgmap v1000: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1001: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:05 compute-0 podman[270462]: 2025-11-26 00:02:05.250388664 +0000 UTC m=+0.073286702 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 00:02:05 compute-0 podman[270461]: 2025-11-26 00:02:05.278701789 +0000 UTC m=+0.102678536 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 00:02:05 compute-0 ceph-mon[75654]: pgmap v1001: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1002: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:07 compute-0 nova_compute[252550]: 2025-11-26 00:02:07.839 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:02:07 compute-0 ceph-mon[75654]: pgmap v1002: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1003: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:09 compute-0 ceph-mon[75654]: pgmap v1003: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1004: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:10 compute-0 nova_compute[252550]: 2025-11-26 00:02:10.817 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:02:10 compute-0 nova_compute[252550]: 2025-11-26 00:02:10.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:02:10 compute-0 nova_compute[252550]: 2025-11-26 00:02:10.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:02:10 compute-0 nova_compute[252550]: 2025-11-26 00:02:10.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 00:02:10 compute-0 nova_compute[252550]: 2025-11-26 00:02:10.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:02:10 compute-0 nova_compute[252550]: 2025-11-26 00:02:10.850 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:02:10 compute-0 nova_compute[252550]: 2025-11-26 00:02:10.850 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:02:10 compute-0 nova_compute[252550]: 2025-11-26 00:02:10.850 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:02:10 compute-0 nova_compute[252550]: 2025-11-26 00:02:10.851 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 00:02:10 compute-0 nova_compute[252550]: 2025-11-26 00:02:10.851 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:02:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:02:11 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840071452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:02:11 compute-0 nova_compute[252550]: 2025-11-26 00:02:11.244 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:02:11 compute-0 nova_compute[252550]: 2025-11-26 00:02:11.318 252558 DEBUG oslo_concurrency.processutils [None req-20a2ea17-03f4-40a4-8984-ba7a295ebaf6 4487e45d40e24fa79dfb4b1c78f270a1 e292ca9fa90d488aac0df28d7a91e252 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:02:11 compute-0 nova_compute[252550]: 2025-11-26 00:02:11.350 252558 DEBUG oslo_concurrency.processutils [None req-20a2ea17-03f4-40a4-8984-ba7a295ebaf6 4487e45d40e24fa79dfb4b1c78f270a1 e292ca9fa90d488aac0df28d7a91e252 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:02:11 compute-0 nova_compute[252550]: 2025-11-26 00:02:11.441 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 00:02:11 compute-0 nova_compute[252550]: 2025-11-26 00:02:11.442 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5178MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 00:02:11 compute-0 nova_compute[252550]: 2025-11-26 00:02:11.442 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:02:11 compute-0 nova_compute[252550]: 2025-11-26 00:02:11.442 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:02:11 compute-0 nova_compute[252550]: 2025-11-26 00:02:11.543 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 00:02:11 compute-0 nova_compute[252550]: 2025-11-26 00:02:11.543 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 00:02:11 compute-0 nova_compute[252550]: 2025-11-26 00:02:11.563 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:02:11 compute-0 ceph-mon[75654]: pgmap v1004: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:11 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1840071452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:02:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:02:12 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/620524020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:02:12 compute-0 nova_compute[252550]: 2025-11-26 00:02:12.020 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:02:12 compute-0 nova_compute[252550]: 2025-11-26 00:02:12.026 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 00:02:12 compute-0 nova_compute[252550]: 2025-11-26 00:02:12.042 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 00:02:12 compute-0 nova_compute[252550]: 2025-11-26 00:02:12.045 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 00:02:12 compute-0 nova_compute[252550]: 2025-11-26 00:02:12.046 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:02:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1005: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:12 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/620524020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:02:13 compute-0 ceph-mon[75654]: pgmap v1005: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:14 compute-0 nova_compute[252550]: 2025-11-26 00:02:14.047 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:02:14 compute-0 nova_compute[252550]: 2025-11-26 00:02:14.048 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 00:02:14 compute-0 nova_compute[252550]: 2025-11-26 00:02:14.048 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 00:02:14 compute-0 nova_compute[252550]: 2025-11-26 00:02:14.100 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 00:02:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1006: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:14 compute-0 nova_compute[252550]: 2025-11-26 00:02:14.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:02:15 compute-0 ceph-mon[75654]: pgmap v1006: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1007: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:16 compute-0 nova_compute[252550]: 2025-11-26 00:02:16.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:02:16 compute-0 nova_compute[252550]: 2025-11-26 00:02:16.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:02:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 00:02:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1933523110' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:02:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 00:02:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1933523110' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:02:17 compute-0 sudo[270552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:17 compute-0 sudo[270552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:17 compute-0 sudo[270552]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:17 compute-0 sudo[270577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:02:17 compute-0 sudo[270577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:17 compute-0 sudo[270577]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:17 compute-0 ceph-mon[75654]: pgmap v1007: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/1933523110' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:02:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/1933523110' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:02:18 compute-0 sudo[270602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:18 compute-0 sudo[270602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:18 compute-0 sudo[270602]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:18 compute-0 sudo[270627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 00:02:18 compute-0 sudo[270627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:18 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:02:18.479 160725 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '32:82:13', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '36:f3:66:b7:57:d1'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 00:02:18 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:02:18.480 160725 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 00:02:18 compute-0 sudo[270627]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1008: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:18 compute-0 sudo[270683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:18 compute-0 sudo[270683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:18 compute-0 sudo[270683]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:18 compute-0 sudo[270708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:02:18 compute-0 sudo[270708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:18 compute-0 sudo[270708]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:18 compute-0 sudo[270733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:18 compute-0 sudo[270733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:18 compute-0 sudo[270733]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:18 compute-0 sudo[270758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 26 00:02:18 compute-0 sudo[270758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:19 compute-0 sudo[270758]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 00:02:19 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:02:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 00:02:19 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:02:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:02:19 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:02:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 00:02:19 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:02:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 00:02:19 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:02:19 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev e54af197-5119-4952-b67c-d161c073fbb8 does not exist
Nov 26 00:02:19 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 946cb404-f589-4b48-b6f9-68f2da4b1644 does not exist
Nov 26 00:02:19 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 1a0d8446-22a1-4f9d-a0a5-e9337f2f58e0 does not exist
Nov 26 00:02:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 00:02:19 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:02:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 00:02:19 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:02:19 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:02:19 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:02:19 compute-0 sudo[270801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:19 compute-0 sudo[270801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:19 compute-0 sudo[270801]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:19 compute-0 sudo[270826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:02:19 compute-0 sudo[270826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:19 compute-0 sudo[270826]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:19 compute-0 sudo[270851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:19 compute-0 sudo[270851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:19 compute-0 sudo[270851]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:19 compute-0 sudo[270876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 00:02:19 compute-0 sudo[270876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:19 compute-0 podman[270941]: 2025-11-26 00:02:19.732609285 +0000 UTC m=+0.044260430 container create 2ac94431f71b225ee68136fbc54b1dd6054894e715cd80e391a62ed5519c37a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:02:19 compute-0 systemd[1]: Started libpod-conmon-2ac94431f71b225ee68136fbc54b1dd6054894e715cd80e391a62ed5519c37a5.scope.
Nov 26 00:02:19 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:02:19 compute-0 podman[270941]: 2025-11-26 00:02:19.711249566 +0000 UTC m=+0.022900751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:02:19 compute-0 podman[270941]: 2025-11-26 00:02:19.822119309 +0000 UTC m=+0.133770484 container init 2ac94431f71b225ee68136fbc54b1dd6054894e715cd80e391a62ed5519c37a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:02:19 compute-0 podman[270941]: 2025-11-26 00:02:19.834893769 +0000 UTC m=+0.146544904 container start 2ac94431f71b225ee68136fbc54b1dd6054894e715cd80e391a62ed5519c37a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:02:19 compute-0 podman[270941]: 2025-11-26 00:02:19.838248858 +0000 UTC m=+0.149900033 container attach 2ac94431f71b225ee68136fbc54b1dd6054894e715cd80e391a62ed5519c37a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:02:19 compute-0 pensive_archimedes[270957]: 167 167
Nov 26 00:02:19 compute-0 systemd[1]: libpod-2ac94431f71b225ee68136fbc54b1dd6054894e715cd80e391a62ed5519c37a5.scope: Deactivated successfully.
Nov 26 00:02:19 compute-0 podman[270941]: 2025-11-26 00:02:19.84660111 +0000 UTC m=+0.158252255 container died 2ac94431f71b225ee68136fbc54b1dd6054894e715cd80e391a62ed5519c37a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a141b1d3d0167bee771144495a81e27a3325ea263f4126e66cf65744dac7ec5a-merged.mount: Deactivated successfully.
Nov 26 00:02:19 compute-0 podman[270941]: 2025-11-26 00:02:19.88751599 +0000 UTC m=+0.199167145 container remove 2ac94431f71b225ee68136fbc54b1dd6054894e715cd80e391a62ed5519c37a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_archimedes, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 26 00:02:19 compute-0 systemd[1]: libpod-conmon-2ac94431f71b225ee68136fbc54b1dd6054894e715cd80e391a62ed5519c37a5.scope: Deactivated successfully.
Nov 26 00:02:20 compute-0 podman[270983]: 2025-11-26 00:02:20.10230548 +0000 UTC m=+0.045237056 container create d7416575e92f8ff60de467d22d23ed2be93fa9a623378d40fe8bd3cbc25e80c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 26 00:02:20 compute-0 ceph-mon[75654]: pgmap v1008: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:02:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:02:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:02:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:02:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:02:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:02:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:02:20 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:02:20 compute-0 systemd[1]: Started libpod-conmon-d7416575e92f8ff60de467d22d23ed2be93fa9a623378d40fe8bd3cbc25e80c2.scope.
Nov 26 00:02:20 compute-0 podman[270983]: 2025-11-26 00:02:20.087440054 +0000 UTC m=+0.030371660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:02:20 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbf08538d290a8db7a0af23ff28a016b552ef966c7790c4877ace4bd035e110/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbf08538d290a8db7a0af23ff28a016b552ef966c7790c4877ace4bd035e110/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbf08538d290a8db7a0af23ff28a016b552ef966c7790c4877ace4bd035e110/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbf08538d290a8db7a0af23ff28a016b552ef966c7790c4877ace4bd035e110/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbf08538d290a8db7a0af23ff28a016b552ef966c7790c4877ace4bd035e110/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:20 compute-0 podman[270983]: 2025-11-26 00:02:20.21348025 +0000 UTC m=+0.156411876 container init d7416575e92f8ff60de467d22d23ed2be93fa9a623378d40fe8bd3cbc25e80c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:02:20 compute-0 podman[270983]: 2025-11-26 00:02:20.226302982 +0000 UTC m=+0.169234568 container start d7416575e92f8ff60de467d22d23ed2be93fa9a623378d40fe8bd3cbc25e80c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:02:20 compute-0 podman[270983]: 2025-11-26 00:02:20.230671318 +0000 UTC m=+0.173602934 container attach d7416575e92f8ff60de467d22d23ed2be93fa9a623378d40fe8bd3cbc25e80c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 26 00:02:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1009: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:21 compute-0 dazzling_mendeleev[270999]: --> passed data devices: 0 physical, 3 LVM
Nov 26 00:02:21 compute-0 dazzling_mendeleev[270999]: --> relative data size: 1.0
Nov 26 00:02:21 compute-0 dazzling_mendeleev[270999]: --> All data devices are unavailable
Nov 26 00:02:21 compute-0 systemd[1]: libpod-d7416575e92f8ff60de467d22d23ed2be93fa9a623378d40fe8bd3cbc25e80c2.scope: Deactivated successfully.
Nov 26 00:02:21 compute-0 podman[270983]: 2025-11-26 00:02:21.20707837 +0000 UTC m=+1.150009946 container died d7416575e92f8ff60de467d22d23ed2be93fa9a623378d40fe8bd3cbc25e80c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 00:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fbf08538d290a8db7a0af23ff28a016b552ef966c7790c4877ace4bd035e110-merged.mount: Deactivated successfully.
Nov 26 00:02:21 compute-0 podman[270983]: 2025-11-26 00:02:21.267951691 +0000 UTC m=+1.210883257 container remove d7416575e92f8ff60de467d22d23ed2be93fa9a623378d40fe8bd3cbc25e80c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:02:21 compute-0 systemd[1]: libpod-conmon-d7416575e92f8ff60de467d22d23ed2be93fa9a623378d40fe8bd3cbc25e80c2.scope: Deactivated successfully.
Nov 26 00:02:21 compute-0 sudo[270876]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:21 compute-0 sudo[271042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:21 compute-0 sudo[271042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:21 compute-0 sudo[271042]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:21 compute-0 sudo[271067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:02:21 compute-0 sudo[271067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:21 compute-0 sudo[271067]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:21 compute-0 sudo[271092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:21 compute-0 sudo[271092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:21 compute-0 sudo[271092]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:21 compute-0 sudo[271117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 26 00:02:21 compute-0 sudo[271117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:21 compute-0 podman[271181]: 2025-11-26 00:02:21.933201456 +0000 UTC m=+0.062526305 container create 1c482c8efd5b1b4be50988d278438fa236a8de63fefd877b312cd55e81f06b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:02:21 compute-0 systemd[1]: Started libpod-conmon-1c482c8efd5b1b4be50988d278438fa236a8de63fefd877b312cd55e81f06b77.scope.
Nov 26 00:02:21 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:02:22 compute-0 podman[271181]: 2025-11-26 00:02:21.910124832 +0000 UTC m=+0.039449781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:02:22 compute-0 podman[271181]: 2025-11-26 00:02:22.017748548 +0000 UTC m=+0.147073437 container init 1c482c8efd5b1b4be50988d278438fa236a8de63fefd877b312cd55e81f06b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mirzakhani, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 26 00:02:22 compute-0 podman[271181]: 2025-11-26 00:02:22.024635072 +0000 UTC m=+0.153959941 container start 1c482c8efd5b1b4be50988d278438fa236a8de63fefd877b312cd55e81f06b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:02:22 compute-0 podman[271181]: 2025-11-26 00:02:22.027933499 +0000 UTC m=+0.157258398 container attach 1c482c8efd5b1b4be50988d278438fa236a8de63fefd877b312cd55e81f06b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mirzakhani, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 26 00:02:22 compute-0 upbeat_mirzakhani[271197]: 167 167
Nov 26 00:02:22 compute-0 systemd[1]: libpod-1c482c8efd5b1b4be50988d278438fa236a8de63fefd877b312cd55e81f06b77.scope: Deactivated successfully.
Nov 26 00:02:22 compute-0 podman[271181]: 2025-11-26 00:02:22.029767978 +0000 UTC m=+0.159092837 container died 1c482c8efd5b1b4be50988d278438fa236a8de63fefd877b312cd55e81f06b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 26 00:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac98f94467210b7e748fe1a46662cadd0ae157affcb77098449d4e50c9e51839-merged.mount: Deactivated successfully.
Nov 26 00:02:22 compute-0 podman[271181]: 2025-11-26 00:02:22.074233172 +0000 UTC m=+0.203558041 container remove 1c482c8efd5b1b4be50988d278438fa236a8de63fefd877b312cd55e81f06b77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mirzakhani, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:02:22 compute-0 systemd[1]: libpod-conmon-1c482c8efd5b1b4be50988d278438fa236a8de63fefd877b312cd55e81f06b77.scope: Deactivated successfully.
Nov 26 00:02:22 compute-0 ceph-mon[75654]: pgmap v1009: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:22 compute-0 podman[271222]: 2025-11-26 00:02:22.282056857 +0000 UTC m=+0.042732070 container create 1125e2b2e6c0412201f6ce736982c65b689bcb56b9aaf5db66208635aaee8c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 26 00:02:22 compute-0 systemd[1]: Started libpod-conmon-1125e2b2e6c0412201f6ce736982c65b689bcb56b9aaf5db66208635aaee8c74.scope.
Nov 26 00:02:22 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51a06521f9141211918754f3f1c5323fcfda09a0ac67e481a482d68b5df5a4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51a06521f9141211918754f3f1c5323fcfda09a0ac67e481a482d68b5df5a4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51a06521f9141211918754f3f1c5323fcfda09a0ac67e481a482d68b5df5a4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51a06521f9141211918754f3f1c5323fcfda09a0ac67e481a482d68b5df5a4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:22 compute-0 podman[271222]: 2025-11-26 00:02:22.266281886 +0000 UTC m=+0.026957089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:02:22 compute-0 podman[271222]: 2025-11-26 00:02:22.370360738 +0000 UTC m=+0.131035941 container init 1125e2b2e6c0412201f6ce736982c65b689bcb56b9aaf5db66208635aaee8c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 00:02:22 compute-0 podman[271222]: 2025-11-26 00:02:22.380360724 +0000 UTC m=+0.141035907 container start 1125e2b2e6c0412201f6ce736982c65b689bcb56b9aaf5db66208635aaee8c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 26 00:02:22 compute-0 podman[271222]: 2025-11-26 00:02:22.384287069 +0000 UTC m=+0.144962262 container attach 1125e2b2e6c0412201f6ce736982c65b689bcb56b9aaf5db66208635aaee8c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:02:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1010: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:23 compute-0 vibrant_benz[271238]: {
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:     "0": [
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:         {
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "devices": [
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "/dev/loop3"
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             ],
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_name": "ceph_lv0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_size": "21470642176",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "name": "ceph_lv0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "tags": {
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.cluster_name": "ceph",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.crush_device_class": "",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.encrypted": "0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.osd_id": "0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.type": "block",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.vdo": "0"
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             },
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "type": "block",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "vg_name": "ceph_vg0"
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:         }
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:     ],
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:     "1": [
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:         {
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "devices": [
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "/dev/loop4"
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             ],
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_name": "ceph_lv1",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_size": "21470642176",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "name": "ceph_lv1",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "tags": {
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.cluster_name": "ceph",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.crush_device_class": "",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.encrypted": "0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.osd_id": "1",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.type": "block",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.vdo": "0"
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             },
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "type": "block",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "vg_name": "ceph_vg1"
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:         }
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:     ],
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:     "2": [
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:         {
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "devices": [
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "/dev/loop5"
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             ],
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_name": "ceph_lv2",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_size": "21470642176",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "name": "ceph_lv2",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "tags": {
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.cluster_name": "ceph",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.crush_device_class": "",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.encrypted": "0",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.osd_id": "2",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.type": "block",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:                 "ceph.vdo": "0"
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             },
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "type": "block",
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:             "vg_name": "ceph_vg2"
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:         }
Nov 26 00:02:23 compute-0 vibrant_benz[271238]:     ]
Nov 26 00:02:23 compute-0 vibrant_benz[271238]: }
Nov 26 00:02:23 compute-0 systemd[1]: libpod-1125e2b2e6c0412201f6ce736982c65b689bcb56b9aaf5db66208635aaee8c74.scope: Deactivated successfully.
Nov 26 00:02:23 compute-0 podman[271222]: 2025-11-26 00:02:23.176489955 +0000 UTC m=+0.937165138 container died 1125e2b2e6c0412201f6ce736982c65b689bcb56b9aaf5db66208635aaee8c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e51a06521f9141211918754f3f1c5323fcfda09a0ac67e481a482d68b5df5a4d-merged.mount: Deactivated successfully.
Nov 26 00:02:23 compute-0 podman[271222]: 2025-11-26 00:02:23.228702886 +0000 UTC m=+0.989378069 container remove 1125e2b2e6c0412201f6ce736982c65b689bcb56b9aaf5db66208635aaee8c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_benz, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:02:23 compute-0 systemd[1]: libpod-conmon-1125e2b2e6c0412201f6ce736982c65b689bcb56b9aaf5db66208635aaee8c74.scope: Deactivated successfully.
Nov 26 00:02:23 compute-0 sudo[271117]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:23 compute-0 sudo[271258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:23 compute-0 sudo[271258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:23 compute-0 sudo[271258]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:23 compute-0 sudo[271283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:02:23 compute-0 sudo[271283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:23 compute-0 sudo[271283]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:23 compute-0 sudo[271308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:23 compute-0 sudo[271308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:23 compute-0 sudo[271308]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:23 compute-0 sudo[271333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 26 00:02:23 compute-0 sudo[271333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:23 compute-0 podman[271398]: 2025-11-26 00:02:23.812977845 +0000 UTC m=+0.036277257 container create 402c9c2baa12eeccc7df61e6d51a52b3e4cdc83c3d608e4e25cf39a254ee57c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:02:23 compute-0 systemd[1]: Started libpod-conmon-402c9c2baa12eeccc7df61e6d51a52b3e4cdc83c3d608e4e25cf39a254ee57c5.scope.
Nov 26 00:02:23 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:02:23 compute-0 podman[271398]: 2025-11-26 00:02:23.886726119 +0000 UTC m=+0.110025551 container init 402c9c2baa12eeccc7df61e6d51a52b3e4cdc83c3d608e4e25cf39a254ee57c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:02:23 compute-0 podman[271398]: 2025-11-26 00:02:23.79628546 +0000 UTC m=+0.019584892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:02:23 compute-0 podman[271398]: 2025-11-26 00:02:23.894127846 +0000 UTC m=+0.117427258 container start 402c9c2baa12eeccc7df61e6d51a52b3e4cdc83c3d608e4e25cf39a254ee57c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 00:02:23 compute-0 podman[271398]: 2025-11-26 00:02:23.897329291 +0000 UTC m=+0.120628773 container attach 402c9c2baa12eeccc7df61e6d51a52b3e4cdc83c3d608e4e25cf39a254ee57c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 26 00:02:23 compute-0 blissful_cori[271414]: 167 167
Nov 26 00:02:23 compute-0 systemd[1]: libpod-402c9c2baa12eeccc7df61e6d51a52b3e4cdc83c3d608e4e25cf39a254ee57c5.scope: Deactivated successfully.
Nov 26 00:02:23 compute-0 podman[271398]: 2025-11-26 00:02:23.899611792 +0000 UTC m=+0.122911204 container died 402c9c2baa12eeccc7df61e6d51a52b3e4cdc83c3d608e4e25cf39a254ee57c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 00:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf1b189b18f95de93ea584406d4e8f2c86721c318a7d8a8c2a35f12c5d2ab56b-merged.mount: Deactivated successfully.
Nov 26 00:02:23 compute-0 podman[271398]: 2025-11-26 00:02:23.932896589 +0000 UTC m=+0.156196011 container remove 402c9c2baa12eeccc7df61e6d51a52b3e4cdc83c3d608e4e25cf39a254ee57c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cori, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 26 00:02:23 compute-0 systemd[1]: libpod-conmon-402c9c2baa12eeccc7df61e6d51a52b3e4cdc83c3d608e4e25cf39a254ee57c5.scope: Deactivated successfully.
Nov 26 00:02:24 compute-0 podman[271437]: 2025-11-26 00:02:24.079523783 +0000 UTC m=+0.040495210 container create 0ca55a79861b4932ee8a85b156a9845b219cff2943cad598c5dce4440e890331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lewin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 26 00:02:24 compute-0 systemd[1]: Started libpod-conmon-0ca55a79861b4932ee8a85b156a9845b219cff2943cad598c5dce4440e890331.scope.
Nov 26 00:02:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb05abac29a4b8ffc898b3c44b072fed46b63a19e27362cb77eb3412a501d56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb05abac29a4b8ffc898b3c44b072fed46b63a19e27362cb77eb3412a501d56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb05abac29a4b8ffc898b3c44b072fed46b63a19e27362cb77eb3412a501d56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eb05abac29a4b8ffc898b3c44b072fed46b63a19e27362cb77eb3412a501d56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:02:24 compute-0 podman[271437]: 2025-11-26 00:02:24.062286824 +0000 UTC m=+0.023258271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:02:24 compute-0 podman[271437]: 2025-11-26 00:02:24.165117563 +0000 UTC m=+0.126088990 container init 0ca55a79861b4932ee8a85b156a9845b219cff2943cad598c5dce4440e890331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lewin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 00:02:24 compute-0 podman[271437]: 2025-11-26 00:02:24.170446224 +0000 UTC m=+0.131417651 container start 0ca55a79861b4932ee8a85b156a9845b219cff2943cad598c5dce4440e890331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 00:02:24 compute-0 ceph-mon[75654]: pgmap v1010: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:24 compute-0 podman[271437]: 2025-11-26 00:02:24.175684034 +0000 UTC m=+0.136655461 container attach 0ca55a79861b4932ee8a85b156a9845b219cff2943cad598c5dce4440e890331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:02:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1011: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:25 compute-0 focused_lewin[271454]: {
Nov 26 00:02:25 compute-0 focused_lewin[271454]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "osd_id": 2,
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "type": "bluestore"
Nov 26 00:02:25 compute-0 focused_lewin[271454]:     },
Nov 26 00:02:25 compute-0 focused_lewin[271454]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "osd_id": 1,
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "type": "bluestore"
Nov 26 00:02:25 compute-0 focused_lewin[271454]:     },
Nov 26 00:02:25 compute-0 focused_lewin[271454]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "osd_id": 0,
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:02:25 compute-0 focused_lewin[271454]:         "type": "bluestore"
Nov 26 00:02:25 compute-0 focused_lewin[271454]:     }
Nov 26 00:02:25 compute-0 focused_lewin[271454]: }
Nov 26 00:02:25 compute-0 systemd[1]: libpod-0ca55a79861b4932ee8a85b156a9845b219cff2943cad598c5dce4440e890331.scope: Deactivated successfully.
Nov 26 00:02:25 compute-0 conmon[271454]: conmon 0ca55a79861b4932ee8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ca55a79861b4932ee8a85b156a9845b219cff2943cad598c5dce4440e890331.scope/container/memory.events
Nov 26 00:02:25 compute-0 podman[271437]: 2025-11-26 00:02:25.081725981 +0000 UTC m=+1.042697408 container died 0ca55a79861b4932ee8a85b156a9845b219cff2943cad598c5dce4440e890331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lewin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 00:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eb05abac29a4b8ffc898b3c44b072fed46b63a19e27362cb77eb3412a501d56-merged.mount: Deactivated successfully.
Nov 26 00:02:25 compute-0 podman[271437]: 2025-11-26 00:02:25.13048239 +0000 UTC m=+1.091453817 container remove 0ca55a79861b4932ee8a85b156a9845b219cff2943cad598c5dce4440e890331 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lewin, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:02:25 compute-0 systemd[1]: libpod-conmon-0ca55a79861b4932ee8a85b156a9845b219cff2943cad598c5dce4440e890331.scope: Deactivated successfully.
Nov 26 00:02:25 compute-0 sudo[271333]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 00:02:25 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:02:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 00:02:25 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:02:25 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 22cf43c3-1525-458b-ad15-667ff3304ceb does not exist
Nov 26 00:02:25 compute-0 sudo[271497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:02:25 compute-0 sudo[271497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:25 compute-0 sudo[271497]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:25 compute-0 sudo[271522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 00:02:25 compute-0 sudo[271522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:02:25 compute-0 sudo[271522]: pam_unix(sudo:session): session closed for user root
Nov 26 00:02:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:02:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:02:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:02:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:02:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:02:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:02:26 compute-0 ceph-mon[75654]: pgmap v1011: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:26 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:02:26 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:02:26 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:02:26.482 160725 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=2ba84045-48af-49e3-86f7-35b32300977f, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 00:02:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1012: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:27 compute-0 podman[271547]: 2025-11-26 00:02:27.229209009 +0000 UTC m=+0.062633129 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 00:02:28 compute-0 ceph-mon[75654]: pgmap v1012: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1013: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:29 compute-0 ceph-mon[75654]: pgmap v1013: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1014: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:31 compute-0 ceph-mon[75654]: pgmap v1014: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1015: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:33 compute-0 ceph-mon[75654]: pgmap v1015: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1016: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:36 compute-0 ceph-mon[75654]: pgmap v1016: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:36 compute-0 podman[271570]: 2025-11-26 00:02:36.305612162 +0000 UTC m=+0.111048898 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Nov 26 00:02:36 compute-0 podman[271569]: 2025-11-26 00:02:36.339256218 +0000 UTC m=+0.151539366 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 00:02:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1017: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:38 compute-0 ceph-mon[75654]: pgmap v1017: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1018: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:39 compute-0 ceph-mon[75654]: pgmap v1018: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1019: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:02:40.771 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:02:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:02:40.771 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:02:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:02:40.771 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:02:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:42 compute-0 ceph-mon[75654]: pgmap v1019: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1020: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:44 compute-0 ceph-mon[75654]: pgmap v1020: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1021: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:46 compute-0 ceph-mon[75654]: pgmap v1021: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1022: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:48 compute-0 ceph-mon[75654]: pgmap v1022: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1023: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:50 compute-0 ceph-mon[75654]: pgmap v1023: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1024: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:02:51 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5992 writes, 24K keys, 5992 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5992 writes, 1182 syncs, 5.07 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1626 writes, 4173 keys, 1626 commit groups, 1.0 writes per commit group, ingest: 2.37 MB, 0.00 MB/s
                                           Interval WAL: 1626 writes, 724 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 00:02:52 compute-0 ceph-mon[75654]: pgmap v1024: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1025: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:54 compute-0 ceph-mon[75654]: pgmap v1025: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1026: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:02:55 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 6033 writes, 24K keys, 6033 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6033 writes, 1137 syncs, 5.31 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1544 writes, 4445 keys, 1544 commit groups, 1.0 writes per commit group, ingest: 2.24 MB, 0.00 MB/s
                                           Interval WAL: 1544 writes, 647 syncs, 2.39 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 00:02:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-26_00:02:56
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'volumes', 'backups', '.mgr', 'vms']
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:02:56 compute-0 ceph-mon[75654]: pgmap v1026: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:02:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1027: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:58 compute-0 podman[271614]: 2025-11-26 00:02:58.234077888 +0000 UTC m=+0.064999002 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 26 00:02:58 compute-0 ceph-mon[75654]: pgmap v1027: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:02:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1028: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:00 compute-0 ceph-mon[75654]: pgmap v1028: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1029: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:03:01 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5481 writes, 23K keys, 5481 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5481 writes, 906 syncs, 6.05 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1309 writes, 3946 keys, 1309 commit groups, 1.0 writes per commit group, ingest: 2.20 MB, 0.00 MB/s
                                           Interval WAL: 1309 writes, 541 syncs, 2.42 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [devicehealth INFO root] Check health
Nov 26 00:03:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 26 00:03:01 compute-0 sudo[271636]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 26 00:03:01 compute-0 sudo[271636]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 26 00:03:01 compute-0 sudo[271636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 26 00:03:01 compute-0 sudo[271636]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:01 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 26 00:03:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 26 00:03:01 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:03:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:03:02 compute-0 ceph-mon[75654]: pgmap v1029: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:02 compute-0 ceph-mon[75654]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 26 00:03:02 compute-0 ceph-mon[75654]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 26 00:03:02 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 26 00:03:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1030: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:04 compute-0 ceph-mon[75654]: pgmap v1030: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1031: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:06 compute-0 ceph-mon[75654]: pgmap v1031: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1032: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:07 compute-0 podman[271640]: 2025-11-26 00:03:07.264348545 +0000 UTC m=+0.080097006 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 26 00:03:07 compute-0 podman[271639]: 2025-11-26 00:03:07.285753085 +0000 UTC m=+0.114049480 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 26 00:03:08 compute-0 ceph-mon[75654]: pgmap v1032: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1033: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:08 compute-0 nova_compute[252550]: 2025-11-26 00:03:08.817 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:03:08 compute-0 nova_compute[252550]: 2025-11-26 00:03:08.960 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:03:10 compute-0 ceph-mon[75654]: pgmap v1033: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1034: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:10 compute-0 nova_compute[252550]: 2025-11-26 00:03:10.959 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:03:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:11 compute-0 nova_compute[252550]: 2025-11-26 00:03:11.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:03:11 compute-0 nova_compute[252550]: 2025-11-26 00:03:11.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 00:03:11 compute-0 nova_compute[252550]: 2025-11-26 00:03:11.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:03:11 compute-0 nova_compute[252550]: 2025-11-26 00:03:11.869 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:03:11 compute-0 nova_compute[252550]: 2025-11-26 00:03:11.870 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:03:11 compute-0 nova_compute[252550]: 2025-11-26 00:03:11.870 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:03:11 compute-0 nova_compute[252550]: 2025-11-26 00:03:11.870 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 00:03:11 compute-0 nova_compute[252550]: 2025-11-26 00:03:11.870 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:03:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:03:12 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4067884424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:03:12 compute-0 nova_compute[252550]: 2025-11-26 00:03:12.304 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:03:12 compute-0 ceph-mon[75654]: pgmap v1034: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:12 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4067884424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:03:12 compute-0 nova_compute[252550]: 2025-11-26 00:03:12.487 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 00:03:12 compute-0 nova_compute[252550]: 2025-11-26 00:03:12.489 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5165MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 00:03:12 compute-0 nova_compute[252550]: 2025-11-26 00:03:12.490 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:03:12 compute-0 nova_compute[252550]: 2025-11-26 00:03:12.490 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:03:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1035: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:12 compute-0 nova_compute[252550]: 2025-11-26 00:03:12.997 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 00:03:12 compute-0 nova_compute[252550]: 2025-11-26 00:03:12.997 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 00:03:13 compute-0 nova_compute[252550]: 2025-11-26 00:03:13.018 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:03:13 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:03:13 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/338706563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:03:13 compute-0 nova_compute[252550]: 2025-11-26 00:03:13.468 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:03:13 compute-0 nova_compute[252550]: 2025-11-26 00:03:13.474 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 00:03:13 compute-0 nova_compute[252550]: 2025-11-26 00:03:13.638 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 00:03:13 compute-0 nova_compute[252550]: 2025-11-26 00:03:13.642 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 00:03:13 compute-0 nova_compute[252550]: 2025-11-26 00:03:13.642 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:03:14 compute-0 ceph-mon[75654]: pgmap v1035: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:14 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/338706563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:03:14 compute-0 nova_compute[252550]: 2025-11-26 00:03:14.644 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:03:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1036: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:14 compute-0 nova_compute[252550]: 2025-11-26 00:03:14.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:03:14 compute-0 nova_compute[252550]: 2025-11-26 00:03:14.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 00:03:14 compute-0 nova_compute[252550]: 2025-11-26 00:03:14.823 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 00:03:14 compute-0 nova_compute[252550]: 2025-11-26 00:03:14.963 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 00:03:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:16 compute-0 ceph-mon[75654]: pgmap v1036: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1037: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:16 compute-0 nova_compute[252550]: 2025-11-26 00:03:16.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:03:16 compute-0 nova_compute[252550]: 2025-11-26 00:03:16.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:03:17 compute-0 sshd-session[271682]: Invalid user ubuntu from 45.78.216.132 port 38952
Nov 26 00:03:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 00:03:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3172055492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:03:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 00:03:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3172055492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:03:18 compute-0 ceph-mon[75654]: pgmap v1037: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Nov 26 00:03:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3172055492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:03:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3172055492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:03:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1038: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:18 compute-0 nova_compute[252550]: 2025-11-26 00:03:18.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:03:18 compute-0 sshd-session[271682]: Received disconnect from 45.78.216.132 port 38952:11: Bye Bye [preauth]
Nov 26 00:03:18 compute-0 sshd-session[271682]: Disconnected from invalid user ubuntu 45.78.216.132 port 38952 [preauth]
Nov 26 00:03:19 compute-0 ceph-mon[75654]: pgmap v1038: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1039: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:21 compute-0 ceph-mon[75654]: pgmap v1039: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1040: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:23 compute-0 ceph-mon[75654]: pgmap v1040: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1041: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:25 compute-0 sudo[271727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:25 compute-0 sudo[271727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:25 compute-0 sudo[271727]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:25 compute-0 sudo[271752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:03:25 compute-0 sudo[271752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:25 compute-0 sudo[271752]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:25 compute-0 sudo[271777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:25 compute-0 sudo[271777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:25 compute-0 sudo[271777]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:25 compute-0 sudo[271802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 26 00:03:25 compute-0 sudo[271802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:25 compute-0 ceph-mon[75654]: pgmap v1041: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:25 compute-0 sudo[271802]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 00:03:25 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 00:03:25 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:25 compute-0 sudo[271849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:25 compute-0 sudo[271849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:25 compute-0 sudo[271849]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:25 compute-0 sudo[271874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:03:25 compute-0 sudo[271874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:25 compute-0 sudo[271874]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:26 compute-0 sudo[271899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:26 compute-0 sudo[271899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:26 compute-0 sudo[271899]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:26 compute-0 sudo[271924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 00:03:26 compute-0 sudo[271924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:03:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:03:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:03:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:03:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:03:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:03:26 compute-0 sudo[271924]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:26 compute-0 sudo[271982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:26 compute-0 sudo[271982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:26 compute-0 sudo[271982]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1042: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:26 compute-0 sudo[272007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:03:26 compute-0 sudo[272007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:26 compute-0 sudo[272007]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:26 compute-0 sudo[272032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:26 compute-0 sudo[272032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:26 compute-0 sudo[272032]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:26 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:26 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:26 compute-0 sudo[272057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- inventory --format=json-pretty --filter-for-batch
Nov 26 00:03:26 compute-0 sudo[272057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:27 compute-0 podman[272122]: 2025-11-26 00:03:27.20920286 +0000 UTC m=+0.058536250 container create ec3163055881a602715de6740bc8c9a5426542cbedd33fd668be115155f04f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 26 00:03:27 compute-0 systemd[1]: Started libpod-conmon-ec3163055881a602715de6740bc8c9a5426542cbedd33fd668be115155f04f28.scope.
Nov 26 00:03:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:03:27 compute-0 podman[272122]: 2025-11-26 00:03:27.18556252 +0000 UTC m=+0.034896000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:03:27 compute-0 podman[272122]: 2025-11-26 00:03:27.292867168 +0000 UTC m=+0.142200568 container init ec3163055881a602715de6740bc8c9a5426542cbedd33fd668be115155f04f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 26 00:03:27 compute-0 podman[272122]: 2025-11-26 00:03:27.299716761 +0000 UTC m=+0.149050151 container start ec3163055881a602715de6740bc8c9a5426542cbedd33fd668be115155f04f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 00:03:27 compute-0 podman[272122]: 2025-11-26 00:03:27.302533006 +0000 UTC m=+0.151866396 container attach ec3163055881a602715de6740bc8c9a5426542cbedd33fd668be115155f04f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:03:27 compute-0 flamboyant_vaughan[272138]: 167 167
Nov 26 00:03:27 compute-0 systemd[1]: libpod-ec3163055881a602715de6740bc8c9a5426542cbedd33fd668be115155f04f28.scope: Deactivated successfully.
Nov 26 00:03:27 compute-0 podman[272122]: 2025-11-26 00:03:27.307327433 +0000 UTC m=+0.156660863 container died ec3163055881a602715de6740bc8c9a5426542cbedd33fd668be115155f04f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8c36fb46443b8fa6441e3535a61963f6e41ecf2e254bc804b3448f456359af9-merged.mount: Deactivated successfully.
Nov 26 00:03:27 compute-0 podman[272122]: 2025-11-26 00:03:27.355321162 +0000 UTC m=+0.204654552 container remove ec3163055881a602715de6740bc8c9a5426542cbedd33fd668be115155f04f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:03:27 compute-0 systemd[1]: libpod-conmon-ec3163055881a602715de6740bc8c9a5426542cbedd33fd668be115155f04f28.scope: Deactivated successfully.
Nov 26 00:03:27 compute-0 podman[272162]: 2025-11-26 00:03:27.505389159 +0000 UTC m=+0.040653974 container create 6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_albattani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 00:03:27 compute-0 systemd[1]: Started libpod-conmon-6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057.scope.
Nov 26 00:03:27 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25faf611a375340d4aa9649619e653f65554545afd2d77bfa3f44f99f0864db7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25faf611a375340d4aa9649619e653f65554545afd2d77bfa3f44f99f0864db7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25faf611a375340d4aa9649619e653f65554545afd2d77bfa3f44f99f0864db7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25faf611a375340d4aa9649619e653f65554545afd2d77bfa3f44f99f0864db7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:27 compute-0 podman[272162]: 2025-11-26 00:03:27.486460074 +0000 UTC m=+0.021724919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:03:27 compute-0 podman[272162]: 2025-11-26 00:03:27.590091025 +0000 UTC m=+0.125355880 container init 6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_albattani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:03:27 compute-0 podman[272162]: 2025-11-26 00:03:27.596246779 +0000 UTC m=+0.131511574 container start 6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 26 00:03:27 compute-0 podman[272162]: 2025-11-26 00:03:27.607503419 +0000 UTC m=+0.142768224 container attach 6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 26 00:03:27 compute-0 ceph-mon[75654]: pgmap v1042: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1043: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:28 compute-0 systemd[1]: libpod-6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057.scope: Deactivated successfully.
Nov 26 00:03:28 compute-0 interesting_albattani[272178]: [
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:     {
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:         "available": false,
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:         "ceph_device": false,
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 26 00:03:28 compute-0 conmon[272178]: conmon 6be032d9275d8bc9b528 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057.scope/container/memory.events
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:         "lsm_data": {},
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:         "lvs": [],
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:         "path": "/dev/sr0",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:         "rejected_reasons": [
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "Insufficient space (<5GB)",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "Has a FileSystem"
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:         ],
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:         "sys_api": {
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "actuators": null,
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "device_nodes": "sr0",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "devname": "sr0",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "human_readable_size": "482.00 KB",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "id_bus": "ata",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "model": "QEMU DVD-ROM",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "nr_requests": "2",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "parent": "/dev/sr0",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "partitions": {},
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "path": "/dev/sr0",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "removable": "1",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "rev": "2.5+",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "ro": "0",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "rotational": "1",
Nov 26 00:03:28 compute-0 systemd[1]: libpod-6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057.scope: Consumed 1.368s CPU time.
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "sas_address": "",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "sas_device_handle": "",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "scheduler_mode": "mq-deadline",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "sectors": 0,
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "sectorsize": "2048",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "size": 493568.0,
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "support_discard": "2048",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "type": "disk",
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:             "vendor": "QEMU"
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:         }
Nov 26 00:03:28 compute-0 interesting_albattani[272178]:     }
Nov 26 00:03:28 compute-0 interesting_albattani[272178]: ]
Nov 26 00:03:28 compute-0 podman[272162]: 2025-11-26 00:03:28.932954693 +0000 UTC m=+1.468219478 container died 6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 00:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-25faf611a375340d4aa9649619e653f65554545afd2d77bfa3f44f99f0864db7-merged.mount: Deactivated successfully.
Nov 26 00:03:28 compute-0 podman[272162]: 2025-11-26 00:03:28.99627153 +0000 UTC m=+1.531536325 container remove 6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:03:29 compute-0 systemd[1]: libpod-conmon-6be032d9275d8bc9b528e75fccbb28131aca3016cc910e1a323f7a123f3ae057.scope: Deactivated successfully.
Nov 26 00:03:29 compute-0 sudo[272057]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 00:03:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 00:03:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:29 compute-0 podman[274027]: 2025-11-26 00:03:29.061035215 +0000 UTC m=+0.088493998 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 00:03:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:03:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:03:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 00:03:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:03:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 00:03:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:29 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 13fe1762-5a3f-4c85-adc9-33b36fc66922 does not exist
Nov 26 00:03:29 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 2a236bc9-45a5-40a0-b653-7d38732eb71e does not exist
Nov 26 00:03:29 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 62424cb3-ab3b-4507-9a4b-61d2ef2505e5 does not exist
Nov 26 00:03:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 00:03:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:03:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 00:03:29 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:03:29 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:03:29 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:03:29 compute-0 sudo[274061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:29 compute-0 sudo[274061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:29 compute-0 sudo[274061]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:29 compute-0 sudo[274087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:03:29 compute-0 sudo[274087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:29 compute-0 sudo[274087]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:29 compute-0 sudo[274112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:29 compute-0 sudo[274112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:29 compute-0 sudo[274112]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:29 compute-0 sudo[274137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 00:03:29 compute-0 sudo[274137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:29 compute-0 podman[274203]: 2025-11-26 00:03:29.656643739 +0000 UTC m=+0.039927474 container create f2996f4a657d11b850d20ba393981756a7d9ea3137276f784ea7977068a10643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 26 00:03:29 compute-0 systemd[1]: Started libpod-conmon-f2996f4a657d11b850d20ba393981756a7d9ea3137276f784ea7977068a10643.scope.
Nov 26 00:03:29 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:03:29 compute-0 podman[274203]: 2025-11-26 00:03:29.709874687 +0000 UTC m=+0.093158442 container init f2996f4a657d11b850d20ba393981756a7d9ea3137276f784ea7977068a10643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:03:29 compute-0 podman[274203]: 2025-11-26 00:03:29.717039059 +0000 UTC m=+0.100322794 container start f2996f4a657d11b850d20ba393981756a7d9ea3137276f784ea7977068a10643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 00:03:29 compute-0 podman[274203]: 2025-11-26 00:03:29.720565712 +0000 UTC m=+0.103849467 container attach f2996f4a657d11b850d20ba393981756a7d9ea3137276f784ea7977068a10643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:03:29 compute-0 vigilant_dewdney[274219]: 167 167
Nov 26 00:03:29 compute-0 systemd[1]: libpod-f2996f4a657d11b850d20ba393981756a7d9ea3137276f784ea7977068a10643.scope: Deactivated successfully.
Nov 26 00:03:29 compute-0 conmon[274219]: conmon f2996f4a657d11b850d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2996f4a657d11b850d20ba393981756a7d9ea3137276f784ea7977068a10643.scope/container/memory.events
Nov 26 00:03:29 compute-0 podman[274203]: 2025-11-26 00:03:29.722340369 +0000 UTC m=+0.105624104 container died f2996f4a657d11b850d20ba393981756a7d9ea3137276f784ea7977068a10643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:03:29 compute-0 podman[274203]: 2025-11-26 00:03:29.638309681 +0000 UTC m=+0.021593456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:03:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2941f6ea7371be60e709733823f102a00a69f3a227d60127aa0a308e7eb03771-merged.mount: Deactivated successfully.
Nov 26 00:03:29 compute-0 podman[274203]: 2025-11-26 00:03:29.753065538 +0000 UTC m=+0.136349273 container remove f2996f4a657d11b850d20ba393981756a7d9ea3137276f784ea7977068a10643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_dewdney, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 00:03:29 compute-0 systemd[1]: libpod-conmon-f2996f4a657d11b850d20ba393981756a7d9ea3137276f784ea7977068a10643.scope: Deactivated successfully.
Nov 26 00:03:29 compute-0 podman[274243]: 2025-11-26 00:03:29.895174574 +0000 UTC m=+0.040111850 container create f138c9dd36c9a547645bb2da5b147ac178b366c16346dcd9d52b7cf37e7cc0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 26 00:03:29 compute-0 systemd[1]: Started libpod-conmon-f138c9dd36c9a547645bb2da5b147ac178b366c16346dcd9d52b7cf37e7cc0fb.scope.
Nov 26 00:03:29 compute-0 podman[274243]: 2025-11-26 00:03:29.876495825 +0000 UTC m=+0.021433141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:03:29 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df31d91edfced4b6e796107e28cfe97d725bf99db2d432baa19369590bd5d573/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df31d91edfced4b6e796107e28cfe97d725bf99db2d432baa19369590bd5d573/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df31d91edfced4b6e796107e28cfe97d725bf99db2d432baa19369590bd5d573/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df31d91edfced4b6e796107e28cfe97d725bf99db2d432baa19369590bd5d573/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df31d91edfced4b6e796107e28cfe97d725bf99db2d432baa19369590bd5d573/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:30 compute-0 podman[274243]: 2025-11-26 00:03:30.005827991 +0000 UTC m=+0.150765347 container init f138c9dd36c9a547645bb2da5b147ac178b366c16346dcd9d52b7cf37e7cc0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_borg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:03:30 compute-0 podman[274243]: 2025-11-26 00:03:30.020281625 +0000 UTC m=+0.165218901 container start f138c9dd36c9a547645bb2da5b147ac178b366c16346dcd9d52b7cf37e7cc0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_borg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:03:30 compute-0 podman[274243]: 2025-11-26 00:03:30.02381022 +0000 UTC m=+0.168747586 container attach f138c9dd36c9a547645bb2da5b147ac178b366c16346dcd9d52b7cf37e7cc0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_borg, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:03:30 compute-0 ceph-mon[75654]: pgmap v1043: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:03:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:03:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:03:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:03:30 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:03:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1044: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:30 compute-0 crazy_borg[274259]: --> passed data devices: 0 physical, 3 LVM
Nov 26 00:03:30 compute-0 crazy_borg[274259]: --> relative data size: 1.0
Nov 26 00:03:30 compute-0 crazy_borg[274259]: --> All data devices are unavailable
Nov 26 00:03:30 compute-0 systemd[1]: libpod-f138c9dd36c9a547645bb2da5b147ac178b366c16346dcd9d52b7cf37e7cc0fb.scope: Deactivated successfully.
Nov 26 00:03:30 compute-0 podman[274243]: 2025-11-26 00:03:30.985936537 +0000 UTC m=+1.130873843 container died f138c9dd36c9a547645bb2da5b147ac178b366c16346dcd9d52b7cf37e7cc0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 26 00:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-df31d91edfced4b6e796107e28cfe97d725bf99db2d432baa19369590bd5d573-merged.mount: Deactivated successfully.
Nov 26 00:03:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:31 compute-0 podman[274243]: 2025-11-26 00:03:31.042666817 +0000 UTC m=+1.187604093 container remove f138c9dd36c9a547645bb2da5b147ac178b366c16346dcd9d52b7cf37e7cc0fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_borg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 26 00:03:31 compute-0 systemd[1]: libpod-conmon-f138c9dd36c9a547645bb2da5b147ac178b366c16346dcd9d52b7cf37e7cc0fb.scope: Deactivated successfully.
Nov 26 00:03:31 compute-0 sudo[274137]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:31 compute-0 sudo[274300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:31 compute-0 sudo[274300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:31 compute-0 sudo[274300]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:31 compute-0 sudo[274325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:03:31 compute-0 sudo[274325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:31 compute-0 sudo[274325]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:31 compute-0 sudo[274350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:31 compute-0 sudo[274350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:31 compute-0 sudo[274350]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:31 compute-0 sudo[274375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 26 00:03:31 compute-0 sudo[274375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:31 compute-0 podman[274442]: 2025-11-26 00:03:31.630122005 +0000 UTC m=+0.047454465 container create e00d8e0b2ab3e2480d1b818dc574195e08970aef823c926770cb08d0e188e3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 26 00:03:31 compute-0 systemd[1]: Started libpod-conmon-e00d8e0b2ab3e2480d1b818dc574195e08970aef823c926770cb08d0e188e3c8.scope.
Nov 26 00:03:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:03:31 compute-0 podman[274442]: 2025-11-26 00:03:31.604969135 +0000 UTC m=+0.022301685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:03:31 compute-0 podman[274442]: 2025-11-26 00:03:31.701562118 +0000 UTC m=+0.118894658 container init e00d8e0b2ab3e2480d1b818dc574195e08970aef823c926770cb08d0e188e3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:03:31 compute-0 podman[274442]: 2025-11-26 00:03:31.708229086 +0000 UTC m=+0.125561566 container start e00d8e0b2ab3e2480d1b818dc574195e08970aef823c926770cb08d0e188e3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:03:31 compute-0 podman[274442]: 2025-11-26 00:03:31.711232146 +0000 UTC m=+0.128564686 container attach e00d8e0b2ab3e2480d1b818dc574195e08970aef823c926770cb08d0e188e3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:03:31 compute-0 priceless_hertz[274458]: 167 167
Nov 26 00:03:31 compute-0 systemd[1]: libpod-e00d8e0b2ab3e2480d1b818dc574195e08970aef823c926770cb08d0e188e3c8.scope: Deactivated successfully.
Nov 26 00:03:31 compute-0 podman[274442]: 2025-11-26 00:03:31.715268793 +0000 UTC m=+0.132601253 container died e00d8e0b2ab3e2480d1b818dc574195e08970aef823c926770cb08d0e188e3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 00:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aee82e13b03bafffb03d97173cb17f35f027474d767c7b425581f33c4e6f606-merged.mount: Deactivated successfully.
Nov 26 00:03:31 compute-0 podman[274442]: 2025-11-26 00:03:31.75273056 +0000 UTC m=+0.170063020 container remove e00d8e0b2ab3e2480d1b818dc574195e08970aef823c926770cb08d0e188e3c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 00:03:31 compute-0 systemd[1]: libpod-conmon-e00d8e0b2ab3e2480d1b818dc574195e08970aef823c926770cb08d0e188e3c8.scope: Deactivated successfully.
Nov 26 00:03:31 compute-0 podman[274481]: 2025-11-26 00:03:31.928709968 +0000 UTC m=+0.043470929 container create 760d7896acdf22d06dfbc99ba8a35855317d6bd4d03ee6162760f2b081a236d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 26 00:03:31 compute-0 systemd[1]: Started libpod-conmon-760d7896acdf22d06dfbc99ba8a35855317d6bd4d03ee6162760f2b081a236d9.scope.
Nov 26 00:03:31 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128f251fad43ad60c97c5eea7b95c2961f23b9fa0e6a775fe50fe8132c5df6c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128f251fad43ad60c97c5eea7b95c2961f23b9fa0e6a775fe50fe8132c5df6c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128f251fad43ad60c97c5eea7b95c2961f23b9fa0e6a775fe50fe8132c5df6c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128f251fad43ad60c97c5eea7b95c2961f23b9fa0e6a775fe50fe8132c5df6c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:32 compute-0 podman[274481]: 2025-11-26 00:03:31.907688348 +0000 UTC m=+0.022449359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:03:32 compute-0 podman[274481]: 2025-11-26 00:03:32.003518921 +0000 UTC m=+0.118279962 container init 760d7896acdf22d06dfbc99ba8a35855317d6bd4d03ee6162760f2b081a236d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bartik, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 26 00:03:32 compute-0 podman[274481]: 2025-11-26 00:03:32.011210955 +0000 UTC m=+0.125971946 container start 760d7896acdf22d06dfbc99ba8a35855317d6bd4d03ee6162760f2b081a236d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 26 00:03:32 compute-0 podman[274481]: 2025-11-26 00:03:32.014850633 +0000 UTC m=+0.129611674 container attach 760d7896acdf22d06dfbc99ba8a35855317d6bd4d03ee6162760f2b081a236d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:03:32 compute-0 ceph-mon[75654]: pgmap v1044: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1045: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:32 compute-0 boring_bartik[274497]: {
Nov 26 00:03:32 compute-0 boring_bartik[274497]:     "0": [
Nov 26 00:03:32 compute-0 boring_bartik[274497]:         {
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "devices": [
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "/dev/loop3"
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             ],
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_name": "ceph_lv0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_size": "21470642176",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "name": "ceph_lv0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "tags": {
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.cluster_name": "ceph",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.crush_device_class": "",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.encrypted": "0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.osd_id": "0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.type": "block",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.vdo": "0"
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             },
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "type": "block",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "vg_name": "ceph_vg0"
Nov 26 00:03:32 compute-0 boring_bartik[274497]:         }
Nov 26 00:03:32 compute-0 boring_bartik[274497]:     ],
Nov 26 00:03:32 compute-0 boring_bartik[274497]:     "1": [
Nov 26 00:03:32 compute-0 boring_bartik[274497]:         {
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "devices": [
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "/dev/loop4"
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             ],
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_name": "ceph_lv1",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_size": "21470642176",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "name": "ceph_lv1",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "tags": {
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.cluster_name": "ceph",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.crush_device_class": "",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.encrypted": "0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.osd_id": "1",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.type": "block",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.vdo": "0"
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             },
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "type": "block",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "vg_name": "ceph_vg1"
Nov 26 00:03:32 compute-0 boring_bartik[274497]:         }
Nov 26 00:03:32 compute-0 boring_bartik[274497]:     ],
Nov 26 00:03:32 compute-0 boring_bartik[274497]:     "2": [
Nov 26 00:03:32 compute-0 boring_bartik[274497]:         {
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "devices": [
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "/dev/loop5"
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             ],
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_name": "ceph_lv2",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_size": "21470642176",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "name": "ceph_lv2",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "tags": {
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.cluster_name": "ceph",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.crush_device_class": "",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.encrypted": "0",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.osd_id": "2",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.type": "block",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:                 "ceph.vdo": "0"
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             },
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "type": "block",
Nov 26 00:03:32 compute-0 boring_bartik[274497]:             "vg_name": "ceph_vg2"
Nov 26 00:03:32 compute-0 boring_bartik[274497]:         }
Nov 26 00:03:32 compute-0 boring_bartik[274497]:     ]
Nov 26 00:03:32 compute-0 boring_bartik[274497]: }
Nov 26 00:03:32 compute-0 systemd[1]: libpod-760d7896acdf22d06dfbc99ba8a35855317d6bd4d03ee6162760f2b081a236d9.scope: Deactivated successfully.
Nov 26 00:03:32 compute-0 podman[274481]: 2025-11-26 00:03:32.794394546 +0000 UTC m=+0.909155517 container died 760d7896acdf22d06dfbc99ba8a35855317d6bd4d03ee6162760f2b081a236d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bartik, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:03:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-128f251fad43ad60c97c5eea7b95c2961f23b9fa0e6a775fe50fe8132c5df6c5-merged.mount: Deactivated successfully.
Nov 26 00:03:32 compute-0 podman[274481]: 2025-11-26 00:03:32.854850737 +0000 UTC m=+0.969611698 container remove 760d7896acdf22d06dfbc99ba8a35855317d6bd4d03ee6162760f2b081a236d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_bartik, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:03:32 compute-0 systemd[1]: libpod-conmon-760d7896acdf22d06dfbc99ba8a35855317d6bd4d03ee6162760f2b081a236d9.scope: Deactivated successfully.
Nov 26 00:03:32 compute-0 sudo[274375]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:32 compute-0 sudo[274520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:32 compute-0 sudo[274520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:32 compute-0 sudo[274520]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:33 compute-0 sudo[274545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:03:33 compute-0 sudo[274545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:33 compute-0 sudo[274545]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:33 compute-0 sudo[274570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:33 compute-0 sudo[274570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:33 compute-0 sudo[274570]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:33 compute-0 sudo[274595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 26 00:03:33 compute-0 sudo[274595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:33 compute-0 podman[274660]: 2025-11-26 00:03:33.445298443 +0000 UTC m=+0.051106271 container create a6d78d9ad9a1a8a00bc4f1088d72051246db33b94c4d610a4108389677caa8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 26 00:03:33 compute-0 systemd[1]: Started libpod-conmon-a6d78d9ad9a1a8a00bc4f1088d72051246db33b94c4d610a4108389677caa8a6.scope.
Nov 26 00:03:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:03:33 compute-0 podman[274660]: 2025-11-26 00:03:33.418210092 +0000 UTC m=+0.024017960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:03:33 compute-0 podman[274660]: 2025-11-26 00:03:33.515502563 +0000 UTC m=+0.121310411 container init a6d78d9ad9a1a8a00bc4f1088d72051246db33b94c4d610a4108389677caa8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 26 00:03:33 compute-0 podman[274660]: 2025-11-26 00:03:33.520435055 +0000 UTC m=+0.126242873 container start a6d78d9ad9a1a8a00bc4f1088d72051246db33b94c4d610a4108389677caa8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 00:03:33 compute-0 podman[274660]: 2025-11-26 00:03:33.52401339 +0000 UTC m=+0.129821208 container attach a6d78d9ad9a1a8a00bc4f1088d72051246db33b94c4d610a4108389677caa8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 00:03:33 compute-0 hungry_brown[274676]: 167 167
Nov 26 00:03:33 compute-0 systemd[1]: libpod-a6d78d9ad9a1a8a00bc4f1088d72051246db33b94c4d610a4108389677caa8a6.scope: Deactivated successfully.
Nov 26 00:03:33 compute-0 conmon[274676]: conmon a6d78d9ad9a1a8a00bc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6d78d9ad9a1a8a00bc4f1088d72051246db33b94c4d610a4108389677caa8a6.scope/container/memory.events
Nov 26 00:03:33 compute-0 podman[274660]: 2025-11-26 00:03:33.525625503 +0000 UTC m=+0.131433321 container died a6d78d9ad9a1a8a00bc4f1088d72051246db33b94c4d610a4108389677caa8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 26 00:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1b916eae0f4476e2bc9a8514ae7ca172047503ea3532767a95696521984c9a3-merged.mount: Deactivated successfully.
Nov 26 00:03:33 compute-0 podman[274660]: 2025-11-26 00:03:33.571160986 +0000 UTC m=+0.176968804 container remove a6d78d9ad9a1a8a00bc4f1088d72051246db33b94c4d610a4108389677caa8a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_brown, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:03:33 compute-0 systemd[1]: libpod-conmon-a6d78d9ad9a1a8a00bc4f1088d72051246db33b94c4d610a4108389677caa8a6.scope: Deactivated successfully.
Nov 26 00:03:33 compute-0 podman[274700]: 2025-11-26 00:03:33.756087111 +0000 UTC m=+0.043285054 container create 6716d953adae30e981dc3a7539d428c1dc9112436fa6e35eea765f2aea7776c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 00:03:33 compute-0 systemd[1]: Started libpod-conmon-6716d953adae30e981dc3a7539d428c1dc9112436fa6e35eea765f2aea7776c7.scope.
Nov 26 00:03:33 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19d84b7c29844b1cf7e9ebfd37b399e49fad55b8ccee78a5fb5395d94cd57b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19d84b7c29844b1cf7e9ebfd37b399e49fad55b8ccee78a5fb5395d94cd57b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19d84b7c29844b1cf7e9ebfd37b399e49fad55b8ccee78a5fb5395d94cd57b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e19d84b7c29844b1cf7e9ebfd37b399e49fad55b8ccee78a5fb5395d94cd57b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:03:33 compute-0 podman[274700]: 2025-11-26 00:03:33.731853286 +0000 UTC m=+0.019051249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:03:33 compute-0 podman[274700]: 2025-11-26 00:03:33.836610767 +0000 UTC m=+0.123808760 container init 6716d953adae30e981dc3a7539d428c1dc9112436fa6e35eea765f2aea7776c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:03:33 compute-0 podman[274700]: 2025-11-26 00:03:33.847316701 +0000 UTC m=+0.134514664 container start 6716d953adae30e981dc3a7539d428c1dc9112436fa6e35eea765f2aea7776c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 26 00:03:33 compute-0 podman[274700]: 2025-11-26 00:03:33.850951958 +0000 UTC m=+0.138149921 container attach 6716d953adae30e981dc3a7539d428c1dc9112436fa6e35eea765f2aea7776c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 26 00:03:34 compute-0 ceph-mon[75654]: pgmap v1045: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1046: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]: {
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "osd_id": 2,
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "type": "bluestore"
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:     },
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "osd_id": 1,
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "type": "bluestore"
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:     },
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "osd_id": 0,
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:         "type": "bluestore"
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]:     }
Nov 26 00:03:34 compute-0 festive_kowalevski[274717]: }
Nov 26 00:03:34 compute-0 systemd[1]: libpod-6716d953adae30e981dc3a7539d428c1dc9112436fa6e35eea765f2aea7776c7.scope: Deactivated successfully.
Nov 26 00:03:34 compute-0 systemd[1]: libpod-6716d953adae30e981dc3a7539d428c1dc9112436fa6e35eea765f2aea7776c7.scope: Consumed 1.018s CPU time.
Nov 26 00:03:34 compute-0 podman[274750]: 2025-11-26 00:03:34.915641077 +0000 UTC m=+0.038786764 container died 6716d953adae30e981dc3a7539d428c1dc9112436fa6e35eea765f2aea7776c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 00:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e19d84b7c29844b1cf7e9ebfd37b399e49fad55b8ccee78a5fb5395d94cd57b-merged.mount: Deactivated successfully.
Nov 26 00:03:34 compute-0 podman[274750]: 2025-11-26 00:03:34.99533686 +0000 UTC m=+0.118482517 container remove 6716d953adae30e981dc3a7539d428c1dc9112436fa6e35eea765f2aea7776c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kowalevski, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:03:35 compute-0 systemd[1]: libpod-conmon-6716d953adae30e981dc3a7539d428c1dc9112436fa6e35eea765f2aea7776c7.scope: Deactivated successfully.
Nov 26 00:03:35 compute-0 sudo[274595]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 00:03:35 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 00:03:35 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:35 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev d347c54a-fdca-4919-aece-cba335bec199 does not exist
Nov 26 00:03:35 compute-0 sudo[274762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:03:35 compute-0 sudo[274762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:35 compute-0 sudo[274762]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:35 compute-0 sudo[274787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 00:03:35 compute-0 sudo[274787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:03:35 compute-0 sudo[274787]: pam_unix(sudo:session): session closed for user root
Nov 26 00:03:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:36 compute-0 ceph-mon[75654]: pgmap v1046: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:03:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1047: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:38 compute-0 ceph-mon[75654]: pgmap v1047: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:38 compute-0 podman[274813]: 2025-11-26 00:03:38.239085098 +0000 UTC m=+0.065330862 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 00:03:38 compute-0 podman[274812]: 2025-11-26 00:03:38.312580366 +0000 UTC m=+0.138498190 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 26 00:03:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1048: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:40 compute-0 ceph-mon[75654]: pgmap v1048: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1049: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:03:40.772 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:03:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:03:40.772 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:03:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:03:40.773 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:03:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:42 compute-0 ceph-mon[75654]: pgmap v1049: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1050: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:44 compute-0 ceph-mon[75654]: pgmap v1050: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1051: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:46 compute-0 ceph-mon[75654]: pgmap v1051: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1052: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:48 compute-0 ceph-mon[75654]: pgmap v1052: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1053: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:50 compute-0 ceph-mon[75654]: pgmap v1053: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1054: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:52 compute-0 ceph-mon[75654]: pgmap v1054: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1055: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:54 compute-0 ceph-mon[75654]: pgmap v1055: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1056: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-26_00:03:56
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'backups', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data']
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:03:56 compute-0 ceph-mon[75654]: pgmap v1056: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:03:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1057: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:58 compute-0 ceph-mon[75654]: pgmap v1057: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1058: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:03:59 compute-0 podman[274859]: 2025-11-26 00:03:59.285760222 +0000 UTC m=+0.107453544 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 26 00:04:00 compute-0 ceph-mon[75654]: pgmap v1058: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1059: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:04:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:04:02 compute-0 ceph-mon[75654]: pgmap v1059: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1060: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:04 compute-0 ceph-mon[75654]: pgmap v1060: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1061: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:06 compute-0 ceph-mon[75654]: pgmap v1061: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1062: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:08 compute-0 ceph-mon[75654]: pgmap v1062: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1063: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:08 compute-0 nova_compute[252550]: 2025-11-26 00:04:08.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:04:09 compute-0 podman[274880]: 2025-11-26 00:04:09.244347195 +0000 UTC m=+0.051897363 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 26 00:04:09 compute-0 podman[274879]: 2025-11-26 00:04:09.283049536 +0000 UTC m=+0.093790489 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 26 00:04:10 compute-0 ceph-mon[75654]: pgmap v1063: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1064: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:11 compute-0 nova_compute[252550]: 2025-11-26 00:04:11.817 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:04:11 compute-0 nova_compute[252550]: 2025-11-26 00:04:11.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:04:11 compute-0 nova_compute[252550]: 2025-11-26 00:04:11.848 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:04:11 compute-0 nova_compute[252550]: 2025-11-26 00:04:11.848 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:04:11 compute-0 nova_compute[252550]: 2025-11-26 00:04:11.849 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:04:11 compute-0 nova_compute[252550]: 2025-11-26 00:04:11.849 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 00:04:11 compute-0 nova_compute[252550]: 2025-11-26 00:04:11.849 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:04:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:04:12 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1070606882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:04:12 compute-0 ceph-mon[75654]: pgmap v1064: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:12 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1070606882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.241 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.408 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.409 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.409 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.409 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.503 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.504 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.528 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:04:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1065: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:04:12 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182130943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.920 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.927 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.950 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.952 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 00:04:12 compute-0 nova_compute[252550]: 2025-11-26 00:04:12.952 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:04:13 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/182130943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:04:14 compute-0 ceph-mon[75654]: pgmap v1065: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1066: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:14 compute-0 nova_compute[252550]: 2025-11-26 00:04:14.953 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:04:14 compute-0 nova_compute[252550]: 2025-11-26 00:04:14.953 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:04:14 compute-0 nova_compute[252550]: 2025-11-26 00:04:14.954 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 00:04:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:16 compute-0 ceph-mon[75654]: pgmap v1066: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1067: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:16 compute-0 nova_compute[252550]: 2025-11-26 00:04:16.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:04:16 compute-0 nova_compute[252550]: 2025-11-26 00:04:16.823 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 00:04:16 compute-0 nova_compute[252550]: 2025-11-26 00:04:16.823 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 00:04:16 compute-0 nova_compute[252550]: 2025-11-26 00:04:16.842 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 00:04:16 compute-0 nova_compute[252550]: 2025-11-26 00:04:16.842 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:04:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 00:04:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3850789591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:04:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 00:04:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3850789591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:04:17 compute-0 nova_compute[252550]: 2025-11-26 00:04:17.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:04:18 compute-0 ceph-mon[75654]: pgmap v1067: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3850789591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:04:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/3850789591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:04:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1068: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:19 compute-0 nova_compute[252550]: 2025-11-26 00:04:19.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:04:20 compute-0 ceph-mon[75654]: pgmap v1068: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1069: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:22 compute-0 ceph-mon[75654]: pgmap v1069: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1070: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:24 compute-0 ceph-mon[75654]: pgmap v1070: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1071: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:04:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:04:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:04:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:04:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:04:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:04:26 compute-0 ceph-mon[75654]: pgmap v1071: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1072: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:28 compute-0 ceph-mon[75654]: pgmap v1072: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1073: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:30 compute-0 podman[274969]: 2025-11-26 00:04:30.250112028 +0000 UTC m=+0.079328655 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd)
Nov 26 00:04:30 compute-0 ceph-mon[75654]: pgmap v1073: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1074: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:32 compute-0 ceph-mon[75654]: pgmap v1074: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1075: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:33 compute-0 rsyslogd[1001]: imjournal: 15556 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 26 00:04:34 compute-0 ceph-mon[75654]: pgmap v1075: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1076: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:35 compute-0 sudo[274989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:04:35 compute-0 sudo[274989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:35 compute-0 sudo[274989]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:35 compute-0 sudo[275014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:04:35 compute-0 sudo[275014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:35 compute-0 sudo[275014]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:35 compute-0 sudo[275039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:04:35 compute-0 sudo[275039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:35 compute-0 sudo[275039]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:35 compute-0 sudo[275064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 00:04:35 compute-0 sudo[275064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:36 compute-0 sudo[275064]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 26 00:04:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 00:04:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:04:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:04:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 00:04:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:04:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 00:04:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:04:36 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev d716486c-4ceb-42f0-8003-fa594302b4f3 does not exist
Nov 26 00:04:36 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev b61a43f8-0d52-40b3-9506-6815ebde114c does not exist
Nov 26 00:04:36 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 746eeba7-49a1-4897-9ae6-c0c43c563189 does not exist
Nov 26 00:04:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 00:04:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:04:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 00:04:36 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:04:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:04:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:04:36 compute-0 sudo[275121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:04:36 compute-0 sudo[275121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:36 compute-0 sudo[275121]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:36 compute-0 sudo[275146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:04:36 compute-0 sudo[275146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:36 compute-0 sudo[275146]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:36 compute-0 sudo[275171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:04:36 compute-0 sudo[275171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:36 compute-0 sudo[275171]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:36 compute-0 ceph-mon[75654]: pgmap v1076: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 26 00:04:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:04:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:04:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:04:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:04:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:04:36 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:04:36 compute-0 sudo[275196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 00:04:36 compute-0 sudo[275196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1077: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:36 compute-0 podman[275260]: 2025-11-26 00:04:36.733228238 +0000 UTC m=+0.047355962 container create f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 00:04:36 compute-0 systemd[1]: Started libpod-conmon-f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2.scope.
Nov 26 00:04:36 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:04:36 compute-0 podman[275260]: 2025-11-26 00:04:36.712782664 +0000 UTC m=+0.026910418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:04:36 compute-0 podman[275260]: 2025-11-26 00:04:36.819103296 +0000 UTC m=+0.133231050 container init f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 26 00:04:36 compute-0 podman[275260]: 2025-11-26 00:04:36.827233342 +0000 UTC m=+0.141361066 container start f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:04:36 compute-0 podman[275260]: 2025-11-26 00:04:36.830688774 +0000 UTC m=+0.144816498 container attach f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 00:04:36 compute-0 quizzical_visvesvaraya[275276]: 167 167
Nov 26 00:04:36 compute-0 systemd[1]: libpod-f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2.scope: Deactivated successfully.
Nov 26 00:04:36 compute-0 podman[275260]: 2025-11-26 00:04:36.83503697 +0000 UTC m=+0.149164704 container died f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 26 00:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-39994a60c336645d3802a0dfb20014844ebffbac699a1c483c11d8a572a753cd-merged.mount: Deactivated successfully.
Nov 26 00:04:36 compute-0 podman[275260]: 2025-11-26 00:04:36.881663252 +0000 UTC m=+0.195791006 container remove f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 26 00:04:36 compute-0 systemd[1]: libpod-conmon-f803276c10cd9d6583b8d93ee7d0b24ee52708ff54d7f7c798aa7fcc97f050e2.scope: Deactivated successfully.
Nov 26 00:04:37 compute-0 podman[275298]: 2025-11-26 00:04:37.109652045 +0000 UTC m=+0.045496103 container create 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:04:37 compute-0 systemd[1]: Started libpod-conmon-8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e.scope.
Nov 26 00:04:37 compute-0 podman[275298]: 2025-11-26 00:04:37.088355068 +0000 UTC m=+0.024199166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:04:37 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:37 compute-0 podman[275298]: 2025-11-26 00:04:37.220456156 +0000 UTC m=+0.156300324 container init 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 00:04:37 compute-0 podman[275298]: 2025-11-26 00:04:37.23187062 +0000 UTC m=+0.167714718 container start 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:04:37 compute-0 podman[275298]: 2025-11-26 00:04:37.236321919 +0000 UTC m=+0.172165987 container attach 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 26 00:04:38 compute-0 quizzical_banach[275314]: --> passed data devices: 0 physical, 3 LVM
Nov 26 00:04:38 compute-0 quizzical_banach[275314]: --> relative data size: 1.0
Nov 26 00:04:38 compute-0 quizzical_banach[275314]: --> All data devices are unavailable
Nov 26 00:04:38 compute-0 systemd[1]: libpod-8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e.scope: Deactivated successfully.
Nov 26 00:04:38 compute-0 podman[275298]: 2025-11-26 00:04:38.366594694 +0000 UTC m=+1.302438792 container died 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:04:38 compute-0 systemd[1]: libpod-8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e.scope: Consumed 1.085s CPU time.
Nov 26 00:04:38 compute-0 ceph-mon[75654]: pgmap v1077: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5ba6924dd64c08c24f59ce07831a5b44f398fff3d8a33e1ee1ad5d223a56708-merged.mount: Deactivated successfully.
Nov 26 00:04:38 compute-0 podman[275298]: 2025-11-26 00:04:38.42463872 +0000 UTC m=+1.360482788 container remove 8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:04:38 compute-0 systemd[1]: libpod-conmon-8973a73975a7793b891d78a0f89381b8673e80e6946f147887778ed31db51d7e.scope: Deactivated successfully.
Nov 26 00:04:38 compute-0 sudo[275196]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:38 compute-0 sudo[275356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:04:38 compute-0 sudo[275356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:38 compute-0 sudo[275356]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:38 compute-0 sudo[275381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:04:38 compute-0 sudo[275381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:38 compute-0 sudo[275381]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:38 compute-0 sudo[275406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:04:38 compute-0 sudo[275406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:38 compute-0 sudo[275406]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1078: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:38 compute-0 sudo[275431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 26 00:04:38 compute-0 sudo[275431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:39 compute-0 podman[275496]: 2025-11-26 00:04:39.172360386 +0000 UTC m=+0.049375116 container create 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:04:39 compute-0 systemd[1]: Started libpod-conmon-80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db.scope.
Nov 26 00:04:39 compute-0 podman[275496]: 2025-11-26 00:04:39.146375175 +0000 UTC m=+0.023389965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:04:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:04:39 compute-0 podman[275496]: 2025-11-26 00:04:39.260466463 +0000 UTC m=+0.137481183 container init 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:04:39 compute-0 podman[275496]: 2025-11-26 00:04:39.269304259 +0000 UTC m=+0.146318959 container start 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 00:04:39 compute-0 podman[275496]: 2025-11-26 00:04:39.271850396 +0000 UTC m=+0.148865116 container attach 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 26 00:04:39 compute-0 focused_lovelace[275512]: 167 167
Nov 26 00:04:39 compute-0 systemd[1]: libpod-80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db.scope: Deactivated successfully.
Nov 26 00:04:39 compute-0 conmon[275512]: conmon 80a576eee938f255481f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db.scope/container/memory.events
Nov 26 00:04:39 compute-0 podman[275496]: 2025-11-26 00:04:39.27646726 +0000 UTC m=+0.153482000 container died 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-31c1713c5080e2f5a7f23a07600e47ec2120cbf452104e58d41704ff34c8b672-merged.mount: Deactivated successfully.
Nov 26 00:04:39 compute-0 podman[275496]: 2025-11-26 00:04:39.37974455 +0000 UTC m=+0.256759250 container remove 80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lovelace, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 00:04:39 compute-0 systemd[1]: libpod-conmon-80a576eee938f255481fa6f38f3ca406802e1abbb6b3ed51c5df2bbe224313db.scope: Deactivated successfully.
Nov 26 00:04:39 compute-0 podman[275517]: 2025-11-26 00:04:39.433059941 +0000 UTC m=+0.062742213 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 00:04:39 compute-0 podman[275533]: 2025-11-26 00:04:39.460789369 +0000 UTC m=+0.086625949 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 26 00:04:39 compute-0 podman[275581]: 2025-11-26 00:04:39.543223674 +0000 UTC m=+0.041343221 container create ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:04:39 compute-0 systemd[1]: Started libpod-conmon-ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf.scope.
Nov 26 00:04:39 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3d0cf41169eb06face8f4093f6a7d204bbadc8478620d375668b9aeabc457a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3d0cf41169eb06face8f4093f6a7d204bbadc8478620d375668b9aeabc457a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3d0cf41169eb06face8f4093f6a7d204bbadc8478620d375668b9aeabc457a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef3d0cf41169eb06face8f4093f6a7d204bbadc8478620d375668b9aeabc457a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:39 compute-0 podman[275581]: 2025-11-26 00:04:39.524595679 +0000 UTC m=+0.022715216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:04:39 compute-0 podman[275581]: 2025-11-26 00:04:39.633350496 +0000 UTC m=+0.131470073 container init ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 00:04:39 compute-0 podman[275581]: 2025-11-26 00:04:39.644962135 +0000 UTC m=+0.143081692 container start ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 00:04:39 compute-0 podman[275581]: 2025-11-26 00:04:39.648868809 +0000 UTC m=+0.146988356 container attach ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 26 00:04:40 compute-0 lucid_cannon[275598]: {
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:     "0": [
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:         {
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "devices": [
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "/dev/loop3"
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             ],
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_name": "ceph_lv0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_size": "21470642176",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "name": "ceph_lv0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "tags": {
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.cluster_name": "ceph",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.crush_device_class": "",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.encrypted": "0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.osd_id": "0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.type": "block",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.vdo": "0"
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             },
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "type": "block",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "vg_name": "ceph_vg0"
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:         }
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:     ],
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:     "1": [
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:         {
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "devices": [
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "/dev/loop4"
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             ],
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_name": "ceph_lv1",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_size": "21470642176",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "name": "ceph_lv1",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "tags": {
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.cluster_name": "ceph",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.crush_device_class": "",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.encrypted": "0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.osd_id": "1",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.type": "block",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.vdo": "0"
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             },
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "type": "block",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "vg_name": "ceph_vg1"
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:         }
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:     ],
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:     "2": [
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:         {
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "devices": [
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "/dev/loop5"
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             ],
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_name": "ceph_lv2",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_size": "21470642176",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "name": "ceph_lv2",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "tags": {
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.cluster_name": "ceph",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.crush_device_class": "",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.encrypted": "0",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.osd_id": "2",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.type": "block",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:                 "ceph.vdo": "0"
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             },
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "type": "block",
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:             "vg_name": "ceph_vg2"
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:         }
Nov 26 00:04:40 compute-0 lucid_cannon[275598]:     ]
Nov 26 00:04:40 compute-0 lucid_cannon[275598]: }
Nov 26 00:04:40 compute-0 systemd[1]: libpod-ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf.scope: Deactivated successfully.
Nov 26 00:04:40 compute-0 podman[275581]: 2025-11-26 00:04:40.375900134 +0000 UTC m=+0.874019691 container died ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 26 00:04:40 compute-0 ceph-mon[75654]: pgmap v1078: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef3d0cf41169eb06face8f4093f6a7d204bbadc8478620d375668b9aeabc457a-merged.mount: Deactivated successfully.
Nov 26 00:04:40 compute-0 podman[275581]: 2025-11-26 00:04:40.48087398 +0000 UTC m=+0.978993527 container remove ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 26 00:04:40 compute-0 systemd[1]: libpod-conmon-ae0c1ad1d7c9ab9c4b4987003d804e60475c2d4fbd4139070581e2bc664ab9cf.scope: Deactivated successfully.
Nov 26 00:04:40 compute-0 sudo[275431]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:40 compute-0 sudo[275622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:04:40 compute-0 sudo[275622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:40 compute-0 sudo[275622]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:40 compute-0 sudo[275647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:04:40 compute-0 sudo[275647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:40 compute-0 sudo[275647]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1079: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:40 compute-0 sudo[275672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:04:40 compute-0 sudo[275672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:40 compute-0 sudo[275672]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:04:40.772 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:04:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:04:40.774 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:04:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:04:40.774 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:04:40 compute-0 sudo[275697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 26 00:04:40 compute-0 sudo[275697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:41 compute-0 podman[275761]: 2025-11-26 00:04:41.278619199 +0000 UTC m=+0.070156720 container create bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 26 00:04:41 compute-0 systemd[1]: Started libpod-conmon-bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3.scope.
Nov 26 00:04:41 compute-0 podman[275761]: 2025-11-26 00:04:41.25127612 +0000 UTC m=+0.042813741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:04:41 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:04:41 compute-0 podman[275761]: 2025-11-26 00:04:41.382100375 +0000 UTC m=+0.173637926 container init bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 26 00:04:41 compute-0 podman[275761]: 2025-11-26 00:04:41.394215207 +0000 UTC m=+0.185752728 container start bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 26 00:04:41 compute-0 podman[275761]: 2025-11-26 00:04:41.397567827 +0000 UTC m=+0.189105348 container attach bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 00:04:41 compute-0 busy_ellis[275777]: 167 167
Nov 26 00:04:41 compute-0 systemd[1]: libpod-bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3.scope: Deactivated successfully.
Nov 26 00:04:41 compute-0 podman[275761]: 2025-11-26 00:04:41.402644431 +0000 UTC m=+0.194181952 container died bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 26 00:04:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2bb2d82bb17591914721d54a43636ce88c5ca640a926e4a83dd489f3699ba8a-merged.mount: Deactivated successfully.
Nov 26 00:04:41 compute-0 podman[275761]: 2025-11-26 00:04:41.456591249 +0000 UTC m=+0.248128800 container remove bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:04:41 compute-0 systemd[1]: libpod-conmon-bdf535d036f9875b934b9b0144da487ce4e90e6937891bfa1f3879a4244cdfe3.scope: Deactivated successfully.
Nov 26 00:04:41 compute-0 podman[275799]: 2025-11-26 00:04:41.676015853 +0000 UTC m=+0.062315781 container create d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 26 00:04:41 compute-0 systemd[1]: Started libpod-conmon-d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b.scope.
Nov 26 00:04:41 compute-0 podman[275799]: 2025-11-26 00:04:41.651538241 +0000 UTC m=+0.037838269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:04:41 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e081b27e5b1b3b4d65e0fbbc56034855bef3a7e8f3e0649a087a21b0416e12a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e081b27e5b1b3b4d65e0fbbc56034855bef3a7e8f3e0649a087a21b0416e12a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e081b27e5b1b3b4d65e0fbbc56034855bef3a7e8f3e0649a087a21b0416e12a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e081b27e5b1b3b4d65e0fbbc56034855bef3a7e8f3e0649a087a21b0416e12a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:04:41 compute-0 podman[275799]: 2025-11-26 00:04:41.770625153 +0000 UTC m=+0.156925171 container init d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:04:41 compute-0 podman[275799]: 2025-11-26 00:04:41.777210839 +0000 UTC m=+0.163510797 container start d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 00:04:41 compute-0 podman[275799]: 2025-11-26 00:04:41.780988939 +0000 UTC m=+0.167288877 container attach d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 26 00:04:42 compute-0 ceph-mon[75654]: pgmap v1079: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1080: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:42 compute-0 practical_jemison[275815]: {
Nov 26 00:04:42 compute-0 practical_jemison[275815]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "osd_id": 2,
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "type": "bluestore"
Nov 26 00:04:42 compute-0 practical_jemison[275815]:     },
Nov 26 00:04:42 compute-0 practical_jemison[275815]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "osd_id": 1,
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "type": "bluestore"
Nov 26 00:04:42 compute-0 practical_jemison[275815]:     },
Nov 26 00:04:42 compute-0 practical_jemison[275815]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "osd_id": 0,
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:04:42 compute-0 practical_jemison[275815]:         "type": "bluestore"
Nov 26 00:04:42 compute-0 practical_jemison[275815]:     }
Nov 26 00:04:42 compute-0 practical_jemison[275815]: }
Nov 26 00:04:42 compute-0 systemd[1]: libpod-d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b.scope: Deactivated successfully.
Nov 26 00:04:42 compute-0 systemd[1]: libpod-d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b.scope: Consumed 1.066s CPU time.
Nov 26 00:04:42 compute-0 podman[275848]: 2025-11-26 00:04:42.891385726 +0000 UTC m=+0.031668315 container died d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 26 00:04:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e081b27e5b1b3b4d65e0fbbc56034855bef3a7e8f3e0649a087a21b0416e12a-merged.mount: Deactivated successfully.
Nov 26 00:04:42 compute-0 podman[275848]: 2025-11-26 00:04:42.9535011 +0000 UTC m=+0.093783659 container remove d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jemison, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 26 00:04:42 compute-0 systemd[1]: libpod-conmon-d031fc5eeb6a0e8e77c298358d128f12e9112d0e0ace260f57fecc9845aa6b5b.scope: Deactivated successfully.
Nov 26 00:04:43 compute-0 sudo[275697]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:43 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 00:04:43 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:04:43 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 00:04:43 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:04:43 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev d4fbb68d-c484-4318-bf15-cf6cca10a9ec does not exist
Nov 26 00:04:43 compute-0 sudo[275863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:04:43 compute-0 sudo[275863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:43 compute-0 sudo[275863]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:43 compute-0 sudo[275888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 00:04:43 compute-0 sudo[275888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:04:43 compute-0 sudo[275888]: pam_unix(sudo:session): session closed for user root
Nov 26 00:04:44 compute-0 ceph-mon[75654]: pgmap v1080: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:44 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:04:44 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:04:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1081: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:46 compute-0 ceph-mon[75654]: pgmap v1081: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1082: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:48 compute-0 ceph-mon[75654]: pgmap v1082: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1083: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:50 compute-0 ceph-mon[75654]: pgmap v1083: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1084: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:52 compute-0 ceph-mon[75654]: pgmap v1084: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1085: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:54 compute-0 ceph-mon[75654]: pgmap v1085: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1086: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-26_00:04:56
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['images', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'backups', 'vms', 'cephfs.cephfs.data']
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:04:56 compute-0 ceph-mon[75654]: pgmap v1086: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:04:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1087: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:58 compute-0 ceph-mon[75654]: pgmap v1087: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:04:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1088: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:00 compute-0 ceph-mon[75654]: pgmap v1088: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1089: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:01 compute-0 podman[275913]: 2025-11-26 00:05:01.258352964 +0000 UTC m=+0.083340421 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:05:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:05:02 compute-0 ceph-mon[75654]: pgmap v1089: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1090: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:04 compute-0 ceph-mon[75654]: pgmap v1090: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1091: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:06 compute-0 ceph-mon[75654]: pgmap v1091: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1092: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:08 compute-0 ceph-mon[75654]: pgmap v1092: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1093: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:08 compute-0 nova_compute[252550]: 2025-11-26 00:05:08.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:05:10 compute-0 ceph-mon[75654]: pgmap v1093: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:10 compute-0 podman[275935]: 2025-11-26 00:05:10.264606442 +0000 UTC m=+0.086021753 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 26 00:05:10 compute-0 podman[275934]: 2025-11-26 00:05:10.310536615 +0000 UTC m=+0.133701073 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 26 00:05:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1094: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:11 compute-0 nova_compute[252550]: 2025-11-26 00:05:11.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:05:11 compute-0 nova_compute[252550]: 2025-11-26 00:05:11.853 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:05:11 compute-0 nova_compute[252550]: 2025-11-26 00:05:11.854 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:05:11 compute-0 nova_compute[252550]: 2025-11-26 00:05:11.854 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:05:11 compute-0 nova_compute[252550]: 2025-11-26 00:05:11.855 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 00:05:11 compute-0 nova_compute[252550]: 2025-11-26 00:05:11.855 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:05:12 compute-0 ceph-mon[75654]: pgmap v1094: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:05:12 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/543726678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:05:12 compute-0 nova_compute[252550]: 2025-11-26 00:05:12.338 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:05:12 compute-0 nova_compute[252550]: 2025-11-26 00:05:12.483 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 00:05:12 compute-0 nova_compute[252550]: 2025-11-26 00:05:12.484 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5181MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 00:05:12 compute-0 nova_compute[252550]: 2025-11-26 00:05:12.485 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:05:12 compute-0 nova_compute[252550]: 2025-11-26 00:05:12.485 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:05:12 compute-0 nova_compute[252550]: 2025-11-26 00:05:12.558 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 00:05:12 compute-0 nova_compute[252550]: 2025-11-26 00:05:12.559 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 00:05:12 compute-0 nova_compute[252550]: 2025-11-26 00:05:12.574 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:05:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1095: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:12 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:05:12 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4027728265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:05:13 compute-0 nova_compute[252550]: 2025-11-26 00:05:13.008 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:05:13 compute-0 nova_compute[252550]: 2025-11-26 00:05:13.013 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 00:05:13 compute-0 nova_compute[252550]: 2025-11-26 00:05:13.028 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 00:05:13 compute-0 nova_compute[252550]: 2025-11-26 00:05:13.030 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 00:05:13 compute-0 nova_compute[252550]: 2025-11-26 00:05:13.031 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:05:13 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/543726678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:05:13 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4027728265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:05:14 compute-0 nova_compute[252550]: 2025-11-26 00:05:14.028 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:05:14 compute-0 nova_compute[252550]: 2025-11-26 00:05:14.028 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:05:14 compute-0 nova_compute[252550]: 2025-11-26 00:05:14.050 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:05:14 compute-0 ceph-mon[75654]: pgmap v1095: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1096: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:14 compute-0 nova_compute[252550]: 2025-11-26 00:05:14.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:05:14 compute-0 nova_compute[252550]: 2025-11-26 00:05:14.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 00:05:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:16 compute-0 ceph-mon[75654]: pgmap v1096: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1097: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:16 compute-0 nova_compute[252550]: 2025-11-26 00:05:16.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:05:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 00:05:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4172957434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:05:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 00:05:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4172957434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:05:18 compute-0 ceph-mon[75654]: pgmap v1097: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/4172957434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:05:18 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/4172957434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:05:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1098: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:18 compute-0 nova_compute[252550]: 2025-11-26 00:05:18.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:05:18 compute-0 nova_compute[252550]: 2025-11-26 00:05:18.823 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 00:05:18 compute-0 nova_compute[252550]: 2025-11-26 00:05:18.824 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 00:05:18 compute-0 nova_compute[252550]: 2025-11-26 00:05:18.841 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 00:05:19 compute-0 nova_compute[252550]: 2025-11-26 00:05:19.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:05:19 compute-0 nova_compute[252550]: 2025-11-26 00:05:19.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:05:20 compute-0 ceph-mon[75654]: pgmap v1098: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1099: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:22 compute-0 ceph-mon[75654]: pgmap v1099: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1100: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:24 compute-0 ceph-mon[75654]: pgmap v1100: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1101: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:05:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:05:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:05:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:05:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:05:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:05:26 compute-0 ceph-mon[75654]: pgmap v1101: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1102: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:28 compute-0 ceph-mon[75654]: pgmap v1102: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1103: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:30 compute-0 ceph-mon[75654]: pgmap v1103: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1104: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:32 compute-0 podman[276022]: 2025-11-26 00:05:32.249230972 +0000 UTC m=+0.080875574 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 26 00:05:32 compute-0 ceph-mon[75654]: pgmap v1104: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1105: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:34 compute-0 ceph-mon[75654]: pgmap v1105: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:34 compute-0 sshd-session[276042]: Invalid user support from 78.128.112.74 port 38614
Nov 26 00:05:34 compute-0 sshd-session[276042]: Connection closed by invalid user support 78.128.112.74 port 38614 [preauth]
Nov 26 00:05:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1106: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.319639) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535319735, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 2411298, "memory_usage": 2457768, "flush_reason": "Manual Compaction"}
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535337283, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2329144, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20644, "largest_seqno": 22699, "table_properties": {"data_size": 2319784, "index_size": 5918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18595, "raw_average_key_size": 19, "raw_value_size": 2301128, "raw_average_value_size": 2471, "num_data_blocks": 271, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764115304, "oldest_key_time": 1764115304, "file_creation_time": 1764115535, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 17685 microseconds, and 7966 cpu microseconds.
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.337337) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2329144 bytes OK
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.337356) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.339031) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.339045) EVENT_LOG_v1 {"time_micros": 1764115535339040, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.339062) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2402679, prev total WAL file size 2402679, number of live WAL files 2.
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.339827) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2274KB)], [50(5644KB)]
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535339906, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 8109457, "oldest_snapshot_seqno": -1}
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4450 keys, 6889791 bytes, temperature: kUnknown
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535383531, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 6889791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6856503, "index_size": 21082, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 106523, "raw_average_key_size": 23, "raw_value_size": 6773065, "raw_average_value_size": 1522, "num_data_blocks": 896, "num_entries": 4450, "num_filter_entries": 4450, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764115535, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.383792) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 6889791 bytes
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.389325) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.5 rd, 157.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 5.5 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 4964, records dropped: 514 output_compression: NoCompression
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.389355) EVENT_LOG_v1 {"time_micros": 1764115535389341, "job": 26, "event": "compaction_finished", "compaction_time_micros": 43707, "compaction_time_cpu_micros": 22403, "output_level": 6, "num_output_files": 1, "total_output_size": 6889791, "num_input_records": 4964, "num_output_records": 4450, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535390279, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115535392118, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.339714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.392165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.392187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.392189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.392191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:05:35 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:05:35.392194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:05:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:36 compute-0 ceph-mon[75654]: pgmap v1106: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1107: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:38 compute-0 ceph-mon[75654]: pgmap v1107: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1108: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:40 compute-0 ceph-mon[75654]: pgmap v1108: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1109: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:05:40.773 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:05:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:05:40.773 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:05:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:05:40.773 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:05:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:41 compute-0 podman[276045]: 2025-11-26 00:05:41.263241867 +0000 UTC m=+0.079016725 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 00:05:41 compute-0 podman[276044]: 2025-11-26 00:05:41.329007449 +0000 UTC m=+0.147559891 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 26 00:05:42 compute-0 ceph-mon[75654]: pgmap v1109: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1110: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:43 compute-0 sudo[276089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:43 compute-0 sudo[276089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:43 compute-0 sudo[276089]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:43 compute-0 sudo[276114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:05:43 compute-0 sudo[276114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:43 compute-0 sudo[276114]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:43 compute-0 sudo[276139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:43 compute-0 sudo[276139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:43 compute-0 sudo[276139]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:43 compute-0 sudo[276164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 26 00:05:43 compute-0 sudo[276164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:44 compute-0 podman[276261]: 2025-11-26 00:05:44.080446975 +0000 UTC m=+0.113747711 container exec 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:05:44 compute-0 podman[276261]: 2025-11-26 00:05:44.182592466 +0000 UTC m=+0.215893202 container exec_died 42789e176a5dd28a3c234ac87642cbe4e8e4b6a7a43641214b6da8a496b18af2 (image=quay.io/ceph/ceph:v18, name=ceph-101922db-575f-58e2-980f-928050464f69-mon-compute-0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:05:44 compute-0 ceph-mon[75654]: pgmap v1110: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1111: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:44 compute-0 sudo[276164]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 00:05:44 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:05:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 00:05:44 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:05:44 compute-0 sudo[276402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:44 compute-0 sudo[276402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:44 compute-0 sudo[276402]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:45 compute-0 sudo[276427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:05:45 compute-0 sudo[276427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:45 compute-0 sudo[276427]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:45 compute-0 sudo[276452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:45 compute-0 sudo[276452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:45 compute-0 sudo[276452]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:45 compute-0 sudo[276477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 00:05:45 compute-0 sudo[276477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:45 compute-0 sudo[276477]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:05:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:05:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 00:05:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:05:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 00:05:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:05:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 218c4abf-746e-4e80-9b35-337f9f6ea5cb does not exist
Nov 26 00:05:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 86d3d06a-dfee-4b00-a912-83987d7fb94c does not exist
Nov 26 00:05:45 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 5e23668f-6f52-4f36-815e-90a597caad86 does not exist
Nov 26 00:05:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 00:05:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:05:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 00:05:45 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:05:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:05:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:05:45 compute-0 sudo[276533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:45 compute-0 sudo[276533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:45 compute-0 sudo[276533]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:45 compute-0 sudo[276558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:05:45 compute-0 sudo[276558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:45 compute-0 sudo[276558]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:45 compute-0 sudo[276583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:45 compute-0 sudo[276583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:45 compute-0 sudo[276583]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:45 compute-0 ceph-mon[75654]: pgmap v1111: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:05:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:05:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:05:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:05:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:05:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:05:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:05:45 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:05:45 compute-0 sudo[276608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 00:05:45 compute-0 sudo[276608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:46 compute-0 podman[276675]: 2025-11-26 00:05:46.302313166 +0000 UTC m=+0.038559498 container create 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:05:46 compute-0 systemd[1]: Started libpod-conmon-914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2.scope.
Nov 26 00:05:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:05:46 compute-0 podman[276675]: 2025-11-26 00:05:46.372358642 +0000 UTC m=+0.108604994 container init 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:05:46 compute-0 podman[276675]: 2025-11-26 00:05:46.379625626 +0000 UTC m=+0.115871968 container start 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 00:05:46 compute-0 podman[276675]: 2025-11-26 00:05:46.382610055 +0000 UTC m=+0.118856467 container attach 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:05:46 compute-0 podman[276675]: 2025-11-26 00:05:46.287836951 +0000 UTC m=+0.024083313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:05:46 compute-0 admiring_buck[276691]: 167 167
Nov 26 00:05:46 compute-0 systemd[1]: libpod-914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2.scope: Deactivated successfully.
Nov 26 00:05:46 compute-0 podman[276675]: 2025-11-26 00:05:46.385997675 +0000 UTC m=+0.122244097 container died 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 26 00:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d63d983e07867fdd91c53b84995387d610f0e007d8fc7643a894651feca70ab2-merged.mount: Deactivated successfully.
Nov 26 00:05:46 compute-0 podman[276675]: 2025-11-26 00:05:46.434093096 +0000 UTC m=+0.170339448 container remove 914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 00:05:46 compute-0 systemd[1]: libpod-conmon-914da555c01becadaa761ecc21133142da00280ab99fdb621990d10dca31cda2.scope: Deactivated successfully.
Nov 26 00:05:46 compute-0 podman[276714]: 2025-11-26 00:05:46.617961414 +0000 UTC m=+0.054239186 container create e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:05:46 compute-0 systemd[1]: Started libpod-conmon-e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b.scope.
Nov 26 00:05:46 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:46 compute-0 podman[276714]: 2025-11-26 00:05:46.599899713 +0000 UTC m=+0.036177505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:05:46 compute-0 podman[276714]: 2025-11-26 00:05:46.702385832 +0000 UTC m=+0.138663664 container init e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 00:05:46 compute-0 podman[276714]: 2025-11-26 00:05:46.708915317 +0000 UTC m=+0.145193079 container start e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 00:05:46 compute-0 podman[276714]: 2025-11-26 00:05:46.712018149 +0000 UTC m=+0.148295981 container attach e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:05:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1112: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:47 compute-0 focused_wiles[276730]: --> passed data devices: 0 physical, 3 LVM
Nov 26 00:05:47 compute-0 focused_wiles[276730]: --> relative data size: 1.0
Nov 26 00:05:47 compute-0 focused_wiles[276730]: --> All data devices are unavailable
Nov 26 00:05:47 compute-0 systemd[1]: libpod-e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b.scope: Deactivated successfully.
Nov 26 00:05:47 compute-0 podman[276714]: 2025-11-26 00:05:47.754526468 +0000 UTC m=+1.190804260 container died e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 26 00:05:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-df1abe5d00327bd6610b56b276c01860c7feb9a3ec9ceb74a046a1c10630afa7-merged.mount: Deactivated successfully.
Nov 26 00:05:47 compute-0 podman[276714]: 2025-11-26 00:05:47.843418665 +0000 UTC m=+1.279696437 container remove e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wiles, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 26 00:05:47 compute-0 systemd[1]: libpod-conmon-e0aeedd268cad0a7c554eb43a173dbc5ed032520a3b79a93ddbf2923bbaec60b.scope: Deactivated successfully.
Nov 26 00:05:47 compute-0 sudo[276608]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:47 compute-0 sudo[276773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:47 compute-0 sudo[276773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:47 compute-0 sudo[276773]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:48 compute-0 sudo[276798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:05:48 compute-0 sudo[276798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:48 compute-0 sudo[276798]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:48 compute-0 ceph-mon[75654]: pgmap v1112: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:48 compute-0 sudo[276823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:48 compute-0 sudo[276823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:48 compute-0 sudo[276823]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:48 compute-0 sudo[276848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- lvm list --format json
Nov 26 00:05:48 compute-0 sudo[276848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:48 compute-0 podman[276915]: 2025-11-26 00:05:48.498847592 +0000 UTC m=+0.055906990 container create 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:05:48 compute-0 systemd[1]: Started libpod-conmon-6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5.scope.
Nov 26 00:05:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:05:48 compute-0 podman[276915]: 2025-11-26 00:05:48.479714033 +0000 UTC m=+0.036773461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:05:48 compute-0 podman[276915]: 2025-11-26 00:05:48.584806953 +0000 UTC m=+0.141866351 container init 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 26 00:05:48 compute-0 podman[276915]: 2025-11-26 00:05:48.592401524 +0000 UTC m=+0.149460912 container start 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 26 00:05:48 compute-0 podman[276915]: 2025-11-26 00:05:48.595515667 +0000 UTC m=+0.152575155 container attach 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 00:05:48 compute-0 competent_shockley[276932]: 167 167
Nov 26 00:05:48 compute-0 systemd[1]: libpod-6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5.scope: Deactivated successfully.
Nov 26 00:05:48 compute-0 podman[276915]: 2025-11-26 00:05:48.596406031 +0000 UTC m=+0.153465409 container died 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 26 00:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-52b81ee6979c9a119a888588e917c3bb7cdd46ea6bbd56718820bc26377effa5-merged.mount: Deactivated successfully.
Nov 26 00:05:48 compute-0 podman[276915]: 2025-11-26 00:05:48.640474825 +0000 UTC m=+0.197534203 container remove 6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 26 00:05:48 compute-0 systemd[1]: libpod-conmon-6c30a048d5f2eb6b93f5dcb3cf786050c933d89f69cdc5ef77caad6b43392ab5.scope: Deactivated successfully.
Nov 26 00:05:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1113: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:48 compute-0 podman[276957]: 2025-11-26 00:05:48.797254231 +0000 UTC m=+0.044330692 container create d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 26 00:05:48 compute-0 systemd[1]: Started libpod-conmon-d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2.scope.
Nov 26 00:05:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae7b6057bdfc3be9aba2b845b2bb2827bd3050fcbe4eb091d995a83c66a50e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae7b6057bdfc3be9aba2b845b2bb2827bd3050fcbe4eb091d995a83c66a50e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae7b6057bdfc3be9aba2b845b2bb2827bd3050fcbe4eb091d995a83c66a50e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ae7b6057bdfc3be9aba2b845b2bb2827bd3050fcbe4eb091d995a83c66a50e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:48 compute-0 podman[276957]: 2025-11-26 00:05:48.868054567 +0000 UTC m=+0.115131048 container init d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 26 00:05:48 compute-0 podman[276957]: 2025-11-26 00:05:48.778968144 +0000 UTC m=+0.026044615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:05:48 compute-0 podman[276957]: 2025-11-26 00:05:48.876209334 +0000 UTC m=+0.123285785 container start d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 26 00:05:48 compute-0 podman[276957]: 2025-11-26 00:05:48.880105497 +0000 UTC m=+0.127181968 container attach d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 26 00:05:49 compute-0 pensive_moore[276974]: {
Nov 26 00:05:49 compute-0 pensive_moore[276974]:     "0": [
Nov 26 00:05:49 compute-0 pensive_moore[276974]:         {
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "devices": [
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "/dev/loop3"
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             ],
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_name": "ceph_lv0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_size": "21470642176",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c5ddf08a-6193-41e0-8332-60b5083aa62e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "name": "ceph_lv0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "tags": {
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.block_uuid": "ae5yUE-ZhRn-ddmS-5Bpx-0OKN-YbDE-pGNAJe",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.cluster_name": "ceph",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.crush_device_class": "",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.encrypted": "0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.osd_fsid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.osd_id": "0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.type": "block",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.vdo": "0"
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             },
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "type": "block",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "vg_name": "ceph_vg0"
Nov 26 00:05:49 compute-0 pensive_moore[276974]:         }
Nov 26 00:05:49 compute-0 pensive_moore[276974]:     ],
Nov 26 00:05:49 compute-0 pensive_moore[276974]:     "1": [
Nov 26 00:05:49 compute-0 pensive_moore[276974]:         {
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "devices": [
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "/dev/loop4"
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             ],
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_name": "ceph_lv1",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_size": "21470642176",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=21bbab34-bea3-466b-8bf7-812749fcef47,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "name": "ceph_lv1",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "tags": {
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.block_uuid": "tv5axp-tK0a-4ETm-UbJz-Ybh3-9Y21-GIWLuI",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.cluster_name": "ceph",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.crush_device_class": "",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.encrypted": "0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.osd_fsid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.osd_id": "1",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.type": "block",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.vdo": "0"
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             },
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "type": "block",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "vg_name": "ceph_vg1"
Nov 26 00:05:49 compute-0 pensive_moore[276974]:         }
Nov 26 00:05:49 compute-0 pensive_moore[276974]:     ],
Nov 26 00:05:49 compute-0 pensive_moore[276974]:     "2": [
Nov 26 00:05:49 compute-0 pensive_moore[276974]:         {
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "devices": [
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "/dev/loop5"
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             ],
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_name": "ceph_lv2",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_size": "21470642176",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=101922db-575f-58e2-980f-928050464f69,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=019d967b-1a56-4e90-8682-a890da577e20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "lv_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "name": "ceph_lv2",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "tags": {
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.block_uuid": "HZQ5tE-p3Hf-hpaG-qjR8-sZ6Q-x4jg-hFCmre",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.cephx_lockbox_secret": "",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.cluster_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.cluster_name": "ceph",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.crush_device_class": "",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.encrypted": "0",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.osd_fsid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.osd_id": "2",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.type": "block",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:                 "ceph.vdo": "0"
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             },
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "type": "block",
Nov 26 00:05:49 compute-0 pensive_moore[276974]:             "vg_name": "ceph_vg2"
Nov 26 00:05:49 compute-0 pensive_moore[276974]:         }
Nov 26 00:05:49 compute-0 pensive_moore[276974]:     ]
Nov 26 00:05:49 compute-0 pensive_moore[276974]: }
Nov 26 00:05:49 compute-0 systemd[1]: libpod-d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2.scope: Deactivated successfully.
Nov 26 00:05:49 compute-0 podman[276957]: 2025-11-26 00:05:49.621228858 +0000 UTC m=+0.868305319 container died d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:05:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-12ae7b6057bdfc3be9aba2b845b2bb2827bd3050fcbe4eb091d995a83c66a50e-merged.mount: Deactivated successfully.
Nov 26 00:05:49 compute-0 podman[276957]: 2025-11-26 00:05:49.67874513 +0000 UTC m=+0.925821581 container remove d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 26 00:05:49 compute-0 systemd[1]: libpod-conmon-d64c6a1b61f6a20c078aabf93db0c13ddf2247c79f8244de118b08e45aa39ca2.scope: Deactivated successfully.
Nov 26 00:05:49 compute-0 sudo[276848]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:49 compute-0 sudo[276997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:49 compute-0 sudo[276997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:49 compute-0 sudo[276997]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:49 compute-0 sudo[277022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:05:49 compute-0 sudo[277022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:49 compute-0 sudo[277022]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:49 compute-0 sudo[277047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:49 compute-0 sudo[277047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:49 compute-0 sudo[277047]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:49 compute-0 sudo[277072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 -- raw list --format json
Nov 26 00:05:50 compute-0 sudo[277072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:50 compute-0 ceph-mon[75654]: pgmap v1113: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:50 compute-0 podman[277139]: 2025-11-26 00:05:50.409561426 +0000 UTC m=+0.052323855 container create bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 26 00:05:50 compute-0 systemd[1]: Started libpod-conmon-bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c.scope.
Nov 26 00:05:50 compute-0 podman[277139]: 2025-11-26 00:05:50.381778026 +0000 UTC m=+0.024540495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:05:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:05:50 compute-0 podman[277139]: 2025-11-26 00:05:50.504975567 +0000 UTC m=+0.147738006 container init bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 26 00:05:50 compute-0 podman[277139]: 2025-11-26 00:05:50.514139992 +0000 UTC m=+0.156902411 container start bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 26 00:05:50 compute-0 podman[277139]: 2025-11-26 00:05:50.517548712 +0000 UTC m=+0.160311461 container attach bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 26 00:05:50 compute-0 quizzical_wing[277155]: 167 167
Nov 26 00:05:50 compute-0 systemd[1]: libpod-bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c.scope: Deactivated successfully.
Nov 26 00:05:50 compute-0 conmon[277155]: conmon bd492a84160eb6fce406 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c.scope/container/memory.events
Nov 26 00:05:50 compute-0 podman[277139]: 2025-11-26 00:05:50.520413798 +0000 UTC m=+0.163176217 container died bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 26 00:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-189c8081a7db1072b61c652bcc670f8f58c4820fad4fc6e121442a8581a63228-merged.mount: Deactivated successfully.
Nov 26 00:05:50 compute-0 podman[277139]: 2025-11-26 00:05:50.552110163 +0000 UTC m=+0.194872582 container remove bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wing, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 26 00:05:50 compute-0 systemd[1]: libpod-conmon-bd492a84160eb6fce406222bf58a54dcba1ff4f80b21652f1889dac035f8511c.scope: Deactivated successfully.
Nov 26 00:05:50 compute-0 podman[277180]: 2025-11-26 00:05:50.720023366 +0000 UTC m=+0.040691296 container create 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 26 00:05:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1114: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:50 compute-0 systemd[1]: Started libpod-conmon-667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6.scope.
Nov 26 00:05:50 compute-0 systemd[1]: Started libcrun container.
Nov 26 00:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a2ec919807445fa504f4be29ac22d5dd52b9033ebb25d50b15652efcaeb0d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a2ec919807445fa504f4be29ac22d5dd52b9033ebb25d50b15652efcaeb0d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a2ec919807445fa504f4be29ac22d5dd52b9033ebb25d50b15652efcaeb0d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77a2ec919807445fa504f4be29ac22d5dd52b9033ebb25d50b15652efcaeb0d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 26 00:05:50 compute-0 podman[277180]: 2025-11-26 00:05:50.700343782 +0000 UTC m=+0.021011732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 26 00:05:50 compute-0 podman[277180]: 2025-11-26 00:05:50.796216855 +0000 UTC m=+0.116884805 container init 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 26 00:05:50 compute-0 podman[277180]: 2025-11-26 00:05:50.801564167 +0000 UTC m=+0.122232097 container start 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 26 00:05:50 compute-0 podman[277180]: 2025-11-26 00:05:50.805247765 +0000 UTC m=+0.125915725 container attach 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 26 00:05:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]: {
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:     "019d967b-1a56-4e90-8682-a890da577e20": {
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "osd_id": 2,
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "osd_uuid": "019d967b-1a56-4e90-8682-a890da577e20",
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "type": "bluestore"
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:     },
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:     "21bbab34-bea3-466b-8bf7-812749fcef47": {
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "osd_id": 1,
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "osd_uuid": "21bbab34-bea3-466b-8bf7-812749fcef47",
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "type": "bluestore"
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:     },
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:     "c5ddf08a-6193-41e0-8332-60b5083aa62e": {
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "ceph_fsid": "101922db-575f-58e2-980f-928050464f69",
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "osd_id": 0,
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "osd_uuid": "c5ddf08a-6193-41e0-8332-60b5083aa62e",
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:         "type": "bluestore"
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]:     }
Nov 26 00:05:51 compute-0 vigorous_dewdney[277196]: }
Nov 26 00:05:51 compute-0 systemd[1]: libpod-667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6.scope: Deactivated successfully.
Nov 26 00:05:51 compute-0 systemd[1]: libpod-667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6.scope: Consumed 1.050s CPU time.
Nov 26 00:05:51 compute-0 podman[277180]: 2025-11-26 00:05:51.844987709 +0000 UTC m=+1.165655669 container died 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 26 00:05:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-77a2ec919807445fa504f4be29ac22d5dd52b9033ebb25d50b15652efcaeb0d8-merged.mount: Deactivated successfully.
Nov 26 00:05:51 compute-0 podman[277180]: 2025-11-26 00:05:51.959450888 +0000 UTC m=+1.280118828 container remove 667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dewdney, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 26 00:05:51 compute-0 systemd[1]: libpod-conmon-667710f2366aa07ad791ef5c9b643410f1cedbe61c4e6f29bfce987f70ce7eb6.scope: Deactivated successfully.
Nov 26 00:05:51 compute-0 sudo[277072]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 26 00:05:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:05:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 26 00:05:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:05:52 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 8b2a3764-83b8-454e-a5de-8cd804140f22 does not exist
Nov 26 00:05:52 compute-0 ceph-mon[75654]: pgmap v1114: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:52 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:05:52 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:05:52 compute-0 sudo[277241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:05:52 compute-0 sudo[277241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:52 compute-0 sudo[277241]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:52 compute-0 sudo[277266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 26 00:05:52 compute-0 sudo[277266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:05:52 compute-0 sudo[277266]: pam_unix(sudo:session): session closed for user root
Nov 26 00:05:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1115: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:54 compute-0 ceph-mon[75654]: pgmap v1115: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:54 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1116: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:56 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Optimize plan auto_2025-11-26_00:05:56
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [balancer INFO root] do_upmap
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [balancer INFO root] pools ['vms', 'backups', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data']
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [balancer INFO root] prepared 0/10 changes
Nov 26 00:05:56 compute-0 ceph-mon[75654]: pgmap v1116: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 26 00:05:56 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1117: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:58 compute-0 ceph-mon[75654]: pgmap v1117: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:05:58 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1118: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:00 compute-0 ceph-mon[75654]: pgmap v1118: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:00 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1119: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:01 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] _maybe_adjust
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:01 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:06:02 compute-0 ceph-mon[75654]: pgmap v1119: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:02 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1120: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:03 compute-0 podman[277292]: 2025-11-26 00:06:03.272353148 +0000 UTC m=+0.095531066 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 26 00:06:03 compute-0 nova_compute[252550]: 2025-11-26 00:06:03.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:03 compute-0 nova_compute[252550]: 2025-11-26 00:06:03.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 00:06:03 compute-0 nova_compute[252550]: 2025-11-26 00:06:03.841 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 00:06:04 compute-0 ceph-mon[75654]: pgmap v1120: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:04 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1121: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:06 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:06 compute-0 ceph-mon[75654]: pgmap v1121: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:06 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1122: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:08 compute-0 ceph-mon[75654]: pgmap v1122: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:08 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1123: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:08 compute-0 nova_compute[252550]: 2025-11-26 00:06:08.841 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:09 compute-0 ceph-mon[75654]: pgmap v1123: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:10 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1124: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:11 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:11 compute-0 ceph-mon[75654]: pgmap v1124: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:12 compute-0 podman[277314]: 2025-11-26 00:06:12.25656516 +0000 UTC m=+0.069953404 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 00:06:12 compute-0 podman[277313]: 2025-11-26 00:06:12.274131968 +0000 UTC m=+0.100722004 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Nov 26 00:06:12 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1125: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:13 compute-0 nova_compute[252550]: 2025-11-26 00:06:13.817 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:13 compute-0 nova_compute[252550]: 2025-11-26 00:06:13.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:13 compute-0 ceph-mon[75654]: pgmap v1125: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:13 compute-0 nova_compute[252550]: 2025-11-26 00:06:13.854 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:06:13 compute-0 nova_compute[252550]: 2025-11-26 00:06:13.855 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:06:13 compute-0 nova_compute[252550]: 2025-11-26 00:06:13.855 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:06:13 compute-0 nova_compute[252550]: 2025-11-26 00:06:13.856 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 00:06:13 compute-0 nova_compute[252550]: 2025-11-26 00:06:13.856 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:06:14 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:06:14 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783339804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:06:14 compute-0 nova_compute[252550]: 2025-11-26 00:06:14.274 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:06:14 compute-0 nova_compute[252550]: 2025-11-26 00:06:14.521 252558 WARNING nova.virt.libvirt.driver [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 00:06:14 compute-0 nova_compute[252550]: 2025-11-26 00:06:14.523 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5148MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 00:06:14 compute-0 nova_compute[252550]: 2025-11-26 00:06:14.523 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:06:14 compute-0 nova_compute[252550]: 2025-11-26 00:06:14.524 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:06:14 compute-0 nova_compute[252550]: 2025-11-26 00:06:14.760 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 00:06:14 compute-0 nova_compute[252550]: 2025-11-26 00:06:14.761 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 00:06:14 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1126: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:14 compute-0 nova_compute[252550]: 2025-11-26 00:06:14.892 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing inventories for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 00:06:14 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1783339804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:06:14 compute-0 nova_compute[252550]: 2025-11-26 00:06:14.998 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Updating ProviderTree inventory for provider 08547965-b35f-4b7b-95d8-902f06aa011c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 00:06:14 compute-0 nova_compute[252550]: 2025-11-26 00:06:14.998 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Updating inventory in ProviderTree for provider 08547965-b35f-4b7b-95d8-902f06aa011c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 00:06:15 compute-0 nova_compute[252550]: 2025-11-26 00:06:15.010 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing aggregate associations for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 00:06:15 compute-0 nova_compute[252550]: 2025-11-26 00:06:15.028 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Refreshing trait associations for resource provider 08547965-b35f-4b7b-95d8-902f06aa011c, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_USB,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NODE,HW_CPU_X86_AVX,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_FDC,COMPUTE_ACCELERATORS,HW_CPU_X86_AESNI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE4A,HW_CPU_X86_SHA,HW_CPU_X86_SSE41,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 00:06:15 compute-0 nova_compute[252550]: 2025-11-26 00:06:15.055 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 00:06:15 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 26 00:06:15 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1528675005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:06:15 compute-0 nova_compute[252550]: 2025-11-26 00:06:15.480 252558 DEBUG oslo_concurrency.processutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 00:06:15 compute-0 nova_compute[252550]: 2025-11-26 00:06:15.487 252558 DEBUG nova.compute.provider_tree [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed in ProviderTree for provider: 08547965-b35f-4b7b-95d8-902f06aa011c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 00:06:15 compute-0 nova_compute[252550]: 2025-11-26 00:06:15.513 252558 DEBUG nova.scheduler.client.report [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Inventory has not changed for provider 08547965-b35f-4b7b-95d8-902f06aa011c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 00:06:15 compute-0 nova_compute[252550]: 2025-11-26 00:06:15.516 252558 DEBUG nova.compute.resource_tracker [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 00:06:15 compute-0 nova_compute[252550]: 2025-11-26 00:06:15.516 252558 DEBUG oslo_concurrency.lockutils [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:06:15 compute-0 ceph-mon[75654]: pgmap v1126: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:15 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1528675005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 26 00:06:16 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:16 compute-0 nova_compute[252550]: 2025-11-26 00:06:16.518 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:16 compute-0 nova_compute[252550]: 2025-11-26 00:06:16.519 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:16 compute-0 nova_compute[252550]: 2025-11-26 00:06:16.519 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 00:06:16 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1127: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 26 00:06:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/972289283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:06:17 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 26 00:06:17 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/972289283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:06:17 compute-0 nova_compute[252550]: 2025-11-26 00:06:17.823 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:17 compute-0 ceph-mon[75654]: pgmap v1127: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/972289283' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 26 00:06:17 compute-0 ceph-mon[75654]: from='client.? 192.168.122.10:0/972289283' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 26 00:06:18 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1128: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:19 compute-0 nova_compute[252550]: 2025-11-26 00:06:19.821 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:19 compute-0 nova_compute[252550]: 2025-11-26 00:06:19.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 00:06:19 compute-0 nova_compute[252550]: 2025-11-26 00:06:19.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 00:06:19 compute-0 nova_compute[252550]: 2025-11-26 00:06:19.841 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 00:06:19 compute-0 nova_compute[252550]: 2025-11-26 00:06:19.841 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:19 compute-0 ceph-mon[75654]: pgmap v1128: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:20 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1129: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:20 compute-0 nova_compute[252550]: 2025-11-26 00:06:20.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:20 compute-0 nova_compute[252550]: 2025-11-26 00:06:20.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:20 compute-0 nova_compute[252550]: 2025-11-26 00:06:20.822 252558 DEBUG nova.compute.manager [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 00:06:21 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:21 compute-0 nova_compute[252550]: 2025-11-26 00:06:21.822 252558 DEBUG oslo_service.periodic_task [None req-2aa0f609-36f6-4e20-9b1b-edc4337024cf - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 00:06:21 compute-0 ceph-mon[75654]: pgmap v1129: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:22 compute-0 sshd-session[277403]: Accepted publickey for zuul from 192.168.122.10 port 37294 ssh2: ECDSA SHA256:7fdySOwLsSW4/MY3qn2D8coSBpPDcuftesbV+ivPlmY
Nov 26 00:06:22 compute-0 systemd-logind[789]: New session 55 of user zuul.
Nov 26 00:06:22 compute-0 systemd[1]: Started Session 55 of User zuul.
Nov 26 00:06:22 compute-0 sshd-session[277403]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 26 00:06:22 compute-0 sudo[277407]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 26 00:06:22 compute-0 sudo[277407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 26 00:06:22 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1130: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:23 compute-0 ceph-mon[75654]: pgmap v1130: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:24 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15015 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:24 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1131: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:25 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15017 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:25 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 00:06:25 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039388735' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 00:06:26 compute-0 ceph-mon[75654]: from='client.15015 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:26 compute-0 ceph-mon[75654]: pgmap v1131: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:26 compute-0 ceph-mon[75654]: from='client.15017 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:26 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4039388735' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 26 00:06:26 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:06:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:06:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:06:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:06:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] scanning for idle connections..
Nov 26 00:06:26 compute-0 ceph-mgr[75954]: [volumes INFO mgr_util] cleaning up connections: []
Nov 26 00:06:26 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1132: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:28 compute-0 ceph-mon[75654]: pgmap v1132: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:28 compute-0 ovs-vsctl[277670]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 26 00:06:28 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1133: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:29 compute-0 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 26 00:06:29 compute-0 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 26 00:06:29 compute-0 virtqemud[251995]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 26 00:06:29 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: cache status {prefix=cache status} (starting...)
Nov 26 00:06:29 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: client ls {prefix=client ls} (starting...)
Nov 26 00:06:29 compute-0 lvm[278012]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 26 00:06:29 compute-0 lvm[278012]: VG ceph_vg0 finished
Nov 26 00:06:29 compute-0 lvm[278040]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 26 00:06:29 compute-0 lvm[278040]: VG ceph_vg1 finished
Nov 26 00:06:29 compute-0 lvm[278046]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 26 00:06:29 compute-0 lvm[278046]: VG ceph_vg2 finished
Nov 26 00:06:30 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15021 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:30 compute-0 ceph-mon[75654]: pgmap v1133: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:30 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15023 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:30 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: damage ls {prefix=damage ls} (starting...)
Nov 26 00:06:30 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump loads {prefix=dump loads} (starting...)
Nov 26 00:06:30 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 26 00:06:30 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 26 00:06:30 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1134: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:30 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 26 00:06:30 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/39046050' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 26 00:06:30 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 26 00:06:31 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 26 00:06:31 compute-0 ceph-mon[75654]: from='client.15021 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:31 compute-0 ceph-mon[75654]: from='client.15023 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:31 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/39046050' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 26 00:06:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:31 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15029 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:31 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-26T00:06:31.097+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 00:06:31 compute-0 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 00:06:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:06:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654800089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:06:31 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 26 00:06:31 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 26 00:06:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 26 00:06:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774492520' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 26 00:06:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 26 00:06:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521965635' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 26 00:06:31 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: ops {prefix=ops} (starting...)
Nov 26 00:06:31 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 00:06:31 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2650124604' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 26 00:06:32 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196617477' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mon[75654]: pgmap v1134: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:32 compute-0 ceph-mon[75654]: from='client.15029 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/654800089' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3774492520' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2521965635' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2650124604' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4196617477' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: session ls {prefix=session ls} (starting...)
Nov 26 00:06:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 00:06:32 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776063975' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mds[99641]: mds.cephfs.compute-0.bgauhq asok_command: status {prefix=status} (starting...)
Nov 26 00:06:32 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15043 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 00:06:32 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077956155' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 00:06:32 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1135: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:32 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15047 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:33 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3776063975' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 00:06:33 compute-0 ceph-mon[75654]: from='client.15043 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:33 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4077956155' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 00:06:33 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 00:06:33 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135112060' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 00:06:33 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 26 00:06:33 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/186323434' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 26 00:06:33 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 26 00:06:33 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529213605' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 00:06:33 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 26 00:06:33 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/665988486' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 26 00:06:33 compute-0 podman[278587]: 2025-11-26 00:06:33.873013269 +0000 UTC m=+0.063156293 container health_status b33bb0f8771548517ad62d4e0193b93d84fd686bd9113462f17aea9d51587084 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 26 00:06:34 compute-0 ceph-mon[75654]: pgmap v1135: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:34 compute-0 ceph-mon[75654]: from='client.15047 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:34 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1135112060' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 00:06:34 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/186323434' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 26 00:06:34 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/529213605' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 26 00:06:34 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/665988486' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 26 00:06:34 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15059 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:34 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-26T00:06:34.204+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 00:06:34 compute-0 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 26 00:06:34 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 00:06:34 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723364685' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 00:06:34 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15063 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:34 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 26 00:06:34 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1553018103' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 26 00:06:34 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1136: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:34 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15065 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 26 00:06:35 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1014545830' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 26 00:06:35 compute-0 ceph-mon[75654]: from='client.15059 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:35 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/723364685' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 00:06:35 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1553018103' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 26 00:06:35 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15069 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 26 00:06:35 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754443487' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 00:06:35 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15073 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59367424 unmapped: 344064 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:12.918575+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59367424 unmapped: 344064 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:13.918729+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:14.918896+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:15.919099+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59375616 unmapped: 335872 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 385089 data_alloc: 218103808 data_used: 32768
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:16.919248+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59383808 unmapped: 327680 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:17.919505+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59392000 unmapped: 319488 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:18.919930+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:47.981956+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.d scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:47.995904+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.d scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 29) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:47.981956+0000 osd.2 (osd.2) 28 : cluster [DBG] 5.d scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:47.995904+0000 osd.2 (osd.2) 29 : cluster [DBG] 5.d scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59408384 unmapped: 303104 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:19.920224+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59408384 unmapped: 303104 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:20.920409+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 386236 data_alloc: 218103808 data_used: 32768
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:21.920595+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:22.920775+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.723413467s of 12.791498184s, submitted: 16
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59432960 unmapped: 278528 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:23.920927+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:52.947540+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:34:52.961662+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 31) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:52.947540+0000 osd.2 (osd.2) 30 : cluster [DBG] 5.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:34:52.961662+0000 osd.2 (osd.2) 31 : cluster [DBG] 5.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59441152 unmapped: 270336 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:24.921084+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59441152 unmapped: 270336 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:25.921243+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59482112 unmapped: 229376 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 389943 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:26.921381+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59482112 unmapped: 229376 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:27.921506+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:28.921624+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:29.921746+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59490304 unmapped: 221184 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:30.921851+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 389943 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:31.921959+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:32.922076+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59498496 unmapped: 212992 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:33.922216+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59506688 unmapped: 204800 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:34.922365+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59514880 unmapped: 196608 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:35.922589+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.111580849s of 13.117458344s, submitted: 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59514880 unmapped: 196608 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 391091 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:36.922698+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:06.064970+0000 osd.2 (osd.2) 32 : cluster [DBG] 5.10 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:06.079026+0000 osd.2 (osd.2) 33 : cluster [DBG] 5.10 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 33) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:06.064970+0000 osd.2 (osd.2) 32 : cluster [DBG] 5.10 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:06.079026+0000 osd.2 (osd.2) 33 : cluster [DBG] 5.10 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59523072 unmapped: 188416 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:37.922899+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59523072 unmapped: 188416 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:38.923100+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:08.064517+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.17 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:08.078520+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.17 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 35) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:08.064517+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.17 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:08.078520+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.17 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59531264 unmapped: 180224 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:39.923421+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59531264 unmapped: 180224 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:40.923589+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59539456 unmapped: 172032 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 393387 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:41.923764+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:10.985096+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.1b scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:10.999157+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.1b scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 37) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:10.985096+0000 osd.2 (osd.2) 36 : cluster [DBG] 5.1b scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:10.999157+0000 osd.2 (osd.2) 37 : cluster [DBG] 5.1b scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59547648 unmapped: 163840 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:42.923991+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59555840 unmapped: 155648 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:43.924231+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59555840 unmapped: 155648 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:44.924381+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59564032 unmapped: 147456 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:45.924571+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:15.017993+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.1c scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:15.032072+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.1c scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 39) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:15.017993+0000 osd.2 (osd.2) 38 : cluster [DBG] 5.1c scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:15.032072+0000 osd.2 (osd.2) 39 : cluster [DBG] 5.1c scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59564032 unmapped: 147456 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395683 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:46.924756+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:16.037143+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.1f deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:16.051218+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.1f deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 41) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:16.037143+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.1f deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:16.051218+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.1f deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:47.924937+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:48.925109+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59572224 unmapped: 139264 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:49.925288+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59580416 unmapped: 131072 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:50.925452+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59588608 unmapped: 122880 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 395683 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:51.925620+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59596800 unmapped: 114688 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:52.925770+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59596800 unmapped: 114688 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:53.925935+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.988166809s of 18.023300171s, submitted: 10
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59604992 unmapped: 106496 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:54.926079+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:24.088310+0000 osd.2 (osd.2) 42 : cluster [DBG] 4.18 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:24.102401+0000 osd.2 (osd.2) 43 : cluster [DBG] 4.18 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59613184 unmapped: 98304 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 43) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:24.088310+0000 osd.2 (osd.2) 42 : cluster [DBG] 4.18 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:24.102401+0000 osd.2 (osd.2) 43 : cluster [DBG] 4.18 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:55.926327+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:25.101112+0000 osd.2 (osd.2) 44 : cluster [DBG] 4.1a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:25.115217+0000 osd.2 (osd.2) 45 : cluster [DBG] 4.1a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 45) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:25.101112+0000 osd.2 (osd.2) 44 : cluster [DBG] 4.1a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:25.115217+0000 osd.2 (osd.2) 45 : cluster [DBG] 4.1a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59621376 unmapped: 90112 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 397979 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:56.926577+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59629568 unmapped: 81920 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:57.926844+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59629568 unmapped: 81920 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:58.927001+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:59.927126+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:29.158670+0000 osd.2 (osd.2) 46 : cluster [DBG] 4.1b scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:29.172804+0000 osd.2 (osd.2) 47 : cluster [DBG] 4.1b scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 47) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:29.158670+0000 osd.2 (osd.2) 46 : cluster [DBG] 4.1b scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:29.172804+0000 osd.2 (osd.2) 47 : cluster [DBG] 4.1b scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:00.927421+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:30.185804+0000 osd.2 (osd.2) 48 : cluster [DBG] 4.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:30.199904+0000 osd.2 (osd.2) 49 : cluster [DBG] 4.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 400274 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 49) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:30.185804+0000 osd.2 (osd.2) 48 : cluster [DBG] 4.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:30.199904+0000 osd.2 (osd.2) 49 : cluster [DBG] 4.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:01.927638+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:02.927984+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59645952 unmapped: 65536 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:03.928159+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:33.194099+0000 osd.2 (osd.2) 50 : cluster [DBG] 4.a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:33.208225+0000 osd.2 (osd.2) 51 : cluster [DBG] 4.a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 51) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:33.194099+0000 osd.2 (osd.2) 50 : cluster [DBG] 4.a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:33.208225+0000 osd.2 (osd.2) 51 : cluster [DBG] 4.a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:04.928400+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59654144 unmapped: 57344 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:05.928562+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.980714798s of 12.043251991s, submitted: 10
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59662336 unmapped: 49152 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 402569 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:06.928731+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:36.131618+0000 osd.2 (osd.2) 52 : cluster [DBG] 4.13 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:36.145731+0000 osd.2 (osd.2) 53 : cluster [DBG] 4.13 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.1c deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59670528 unmapped: 40960 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 53) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:36.131618+0000 osd.2 (osd.2) 52 : cluster [DBG] 4.13 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:36.145731+0000 osd.2 (osd.2) 53 : cluster [DBG] 4.13 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:07.929140+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:37.156617+0000 osd.2 (osd.2) 54 : cluster [DBG] 4.1c deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:37.169982+0000 osd.2 (osd.2) 55 : cluster [DBG] 4.1c deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59678720 unmapped: 32768 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 55) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:37.156617+0000 osd.2 (osd.2) 54 : cluster [DBG] 4.1c deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:37.169982+0000 osd.2 (osd.2) 55 : cluster [DBG] 4.1c deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:08.929430+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:38.119989+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.11 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:38.134064+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.11 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 57) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:38.119989+0000 osd.2 (osd.2) 56 : cluster [DBG] 4.11 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:38.134064+0000 osd.2 (osd.2) 57 : cluster [DBG] 4.11 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:09.929706+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:39.113217+0000 osd.2 (osd.2) 58 : cluster [DBG] 7.1c deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:39.127337+0000 osd.2 (osd.2) 59 : cluster [DBG] 7.1c deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 59) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:39.113217+0000 osd.2 (osd.2) 58 : cluster [DBG] 7.1c deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:39.127337+0000 osd.2 (osd.2) 59 : cluster [DBG] 7.1c deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:10.930127+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59686912 unmapped: 24576 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 406013 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:11.930268+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59695104 unmapped: 16384 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:12.930412+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59695104 unmapped: 16384 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:13.930689+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:43.085275+0000 osd.2 (osd.2) 60 : cluster [DBG] 3.18 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:43.099299+0000 osd.2 (osd.2) 61 : cluster [DBG] 3.18 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 61) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:43.085275+0000 osd.2 (osd.2) 60 : cluster [DBG] 3.18 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:43.099299+0000 osd.2 (osd.2) 61 : cluster [DBG] 3.18 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:14.930996+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:15.931225+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:45.093375+0000 osd.2 (osd.2) 62 : cluster [DBG] 7.11 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:45.107448+0000 osd.2 (osd.2) 63 : cluster [DBG] 7.11 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 63) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:45.093375+0000 osd.2 (osd.2) 62 : cluster [DBG] 7.11 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:45.107448+0000 osd.2 (osd.2) 63 : cluster [DBG] 7.11 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59703296 unmapped: 8192 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 408309 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:16.931499+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:17.931657+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:18.931851+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59711488 unmapped: 0 heap: 59711488 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:19.932027+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59719680 unmapped: 1040384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:20.932223+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59719680 unmapped: 1040384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 408309 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:21.932371+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:22.932567+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:23.932744+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59727872 unmapped: 1032192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:24.932954+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59736064 unmapped: 1024000 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:25.933279+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.015459061s of 20.058856964s, submitted: 12
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59752448 unmapped: 1007616 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 409457 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:26.933509+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:56.190527+0000 osd.2 (osd.2) 64 : cluster [DBG] 3.16 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:56.204653+0000 osd.2 (osd.2) 65 : cluster [DBG] 3.16 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 65) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:56.190527+0000 osd.2 (osd.2) 64 : cluster [DBG] 3.16 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:56.204653+0000 osd.2 (osd.2) 65 : cluster [DBG] 3.16 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59752448 unmapped: 1007616 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:27.933745+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:57.154506+0000 osd.2 (osd.2) 66 : cluster [DBG] 7.a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:35:57.168476+0000 osd.2 (osd.2) 67 : cluster [DBG] 7.a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 67) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:57.154506+0000 osd.2 (osd.2) 66 : cluster [DBG] 7.a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:35:57.168476+0000 osd.2 (osd.2) 67 : cluster [DBG] 7.a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59760640 unmapped: 999424 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:28.933964+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59760640 unmapped: 999424 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:29.934091+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59768832 unmapped: 991232 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:30.934264+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:00.097898+0000 osd.2 (osd.2) 68 : cluster [DBG] 3.11 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:00.111987+0000 osd.2 (osd.2) 69 : cluster [DBG] 3.11 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 69) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:00.097898+0000 osd.2 (osd.2) 68 : cluster [DBG] 3.11 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:00.111987+0000 osd.2 (osd.2) 69 : cluster [DBG] 3.11 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59768832 unmapped: 991232 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 411752 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:31.934566+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59777024 unmapped: 983040 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:32.934710+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59777024 unmapped: 983040 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:33.934865+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59785216 unmapped: 974848 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:34.935077+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:04.090135+0000 osd.2 (osd.2) 70 : cluster [DBG] 7.8 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:04.104274+0000 osd.2 (osd.2) 71 : cluster [DBG] 7.8 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 71) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:04.090135+0000 osd.2 (osd.2) 70 : cluster [DBG] 7.8 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:04.104274+0000 osd.2 (osd.2) 71 : cluster [DBG] 7.8 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59801600 unmapped: 958464 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:35.935333+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:05.049109+0000 osd.2 (osd.2) 72 : cluster [DBG] 7.15 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:05.063189+0000 osd.2 (osd.2) 73 : cluster [DBG] 7.15 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 73) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:05.049109+0000 osd.2 (osd.2) 72 : cluster [DBG] 7.15 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:05.063189+0000 osd.2 (osd.2) 73 : cluster [DBG] 7.15 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59817984 unmapped: 942080 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 414047 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:36.935532+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59826176 unmapped: 933888 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:37.935747+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59826176 unmapped: 933888 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:38.935868+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:39.935986+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59834368 unmapped: 925696 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:40.936131+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59850752 unmapped: 909312 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 414047 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:41.936223+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:42.936338+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:43.936529+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.928712845s of 17.964544296s, submitted: 10
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59858944 unmapped: 901120 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:44.936723+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:14.155007+0000 osd.2 (osd.2) 74 : cluster [DBG] 3.5 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:14.169108+0000 osd.2 (osd.2) 75 : cluster [DBG] 3.5 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 75) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:14.155007+0000 osd.2 (osd.2) 74 : cluster [DBG] 3.5 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:14.169108+0000 osd.2 (osd.2) 75 : cluster [DBG] 3.5 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:45.936989+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59867136 unmapped: 892928 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 416341 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:46.937192+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:16.126345+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:16.140448+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 77) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:16.126345+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:16.140448+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59875328 unmapped: 884736 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:47.937402+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59883520 unmapped: 876544 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:48.937919+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:18.153052+0000 osd.2 (osd.2) 78 : cluster [DBG] 7.5 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:18.167216+0000 osd.2 (osd.2) 79 : cluster [DBG] 7.5 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 79) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:18.153052+0000 osd.2 (osd.2) 78 : cluster [DBG] 7.5 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:18.167216+0000 osd.2 (osd.2) 79 : cluster [DBG] 7.5 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59891712 unmapped: 868352 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:49.938121+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59899904 unmapped: 860160 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:50.938337+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:20.179718+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.c scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:20.193651+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.c scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 81) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:20.179718+0000 osd.2 (osd.2) 80 : cluster [DBG] 7.c scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:20.193651+0000 osd.2 (osd.2) 81 : cluster [DBG] 7.c scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418635 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:51.938551+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:52.938694+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59908096 unmapped: 851968 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:53.938885+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59916288 unmapped: 843776 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:54.939012+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59916288 unmapped: 843776 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:55.939461+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59924480 unmapped: 835584 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 418635 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:56.939710+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.752611160s of 12.872432709s, submitted: 8
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:57.940036+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:27.027593+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.8 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:27.041716+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.8 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 83) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:27.027593+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.8 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:27.041716+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.8 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59932672 unmapped: 827392 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:58.940567+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 819200 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:59.940784+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:29.019700+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:29.033760+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 85) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:29.019700+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:29.033760+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59940864 unmapped: 819200 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:00.941346+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59949056 unmapped: 811008 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 420929 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:01.942719+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59949056 unmapped: 811008 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:02.944274+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:03.945488+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:04.946649+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:33.965860+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.1d scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:33.979992+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.1d scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 87) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:33.965860+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.1d scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:33.979992+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.1d scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59957248 unmapped: 802816 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:05.948061+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422077 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:06.949327+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:07.949526+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59965440 unmapped: 794624 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:08.950013+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.934057236s of 11.955801964s, submitted: 6
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59981824 unmapped: 778240 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:09.950395+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:38.983375+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.2 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:38.997461+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.2 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 89) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:38.983375+0000 osd.2 (osd.2) 88 : cluster [DBG] 7.2 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:38.997461+0000 osd.2 (osd.2) 89 : cluster [DBG] 7.2 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59990016 unmapped: 770048 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:10.951250+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423224 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:11.951657+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 59998208 unmapped: 761856 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:12.952388+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:41.975396+0000 osd.2 (osd.2) 90 : cluster [DBG] 3.1e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:41.989590+0000 osd.2 (osd.2) 91 : cluster [DBG] 3.1e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 91) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:41.975396+0000 osd.2 (osd.2) 90 : cluster [DBG] 3.1e scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:41.989590+0000 osd.2 (osd.2) 91 : cluster [DBG] 3.1e scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60006400 unmapped: 753664 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:13.953223+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:43.922789+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.1 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:43.936966+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.1 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 93) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:43.922789+0000 osd.2 (osd.2) 92 : cluster [DBG] 7.1 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:43.936966+0000 osd.2 (osd.2) 93 : cluster [DBG] 7.1 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:14.953988+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60014592 unmapped: 745472 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:15.954687+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 426667 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:16.956825+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:45.955166+0000 osd.2 (osd.2) 94 : cluster [DBG] 7.1a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:45.969268+0000 osd.2 (osd.2) 95 : cluster [DBG] 7.1a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 95) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:45.955166+0000 osd.2 (osd.2) 94 : cluster [DBG] 7.1a scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:45.969268+0000 osd.2 (osd.2) 95 : cluster [DBG] 7.1a scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:17.957251+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:47.880846+0000 osd.2 (osd.2) 96 : cluster [DBG] 3.7 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:47.895043+0000 osd.2 (osd.2) 97 : cluster [DBG] 3.7 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 97) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:47.880846+0000 osd.2 (osd.2) 96 : cluster [DBG] 3.7 deep-scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:47.895043+0000 osd.2 (osd.2) 97 : cluster [DBG] 3.7 deep-scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60030976 unmapped: 729088 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:18.957601+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60039168 unmapped: 720896 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:19.957765+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60047360 unmapped: 712704 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:20.958503+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427814 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60055552 unmapped: 704512 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.913698196s of 12.949940681s, submitted: 10
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:21.958684+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:51.933398+0000 osd.2 (osd.2) 98 : cluster [DBG] 6.8 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:51.947491+0000 osd.2 (osd.2) 99 : cluster [DBG] 6.8 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 99) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:51.933398+0000 osd.2 (osd.2) 98 : cluster [DBG] 6.8 scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:51.947491+0000 osd.2 (osd.2) 99 : cluster [DBG] 6.8 scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60071936 unmapped: 688128 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:22.959115+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60080128 unmapped: 679936 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:23.959221+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60088320 unmapped: 671744 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:24.959383+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60088320 unmapped: 671744 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:25.959572+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:26.960070+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:56.020844+0000 osd.2 (osd.2) 100 : cluster [DBG] 6.f scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  will send 2025-11-25T23:36:56.045628+0000 osd.2 (osd.2) 101 : cluster [DBG] 6.f scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client handle_log_ack log(last 101) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:56.020844+0000 osd.2 (osd.2) 100 : cluster [DBG] 6.f scrub starts
Nov 26 00:06:35 compute-0 ceph-osd[91111]: log_client  logged 2025-11-25T23:36:56.045628+0000 osd.2 (osd.2) 101 : cluster [DBG] 6.f scrub ok
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:27.960346+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60112896 unmapped: 647168 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:28.960531+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60121088 unmapped: 638976 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:29.960721+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60121088 unmapped: 638976 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:30.960906+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60129280 unmapped: 630784 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:31.961297+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60129280 unmapped: 630784 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:32.961614+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 622592 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:33.961885+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60137472 unmapped: 622592 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:34.962083+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:35.962292+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:36.962449+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:37.962610+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60145664 unmapped: 614400 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:38.962845+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:39.963049+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:40.963253+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60153856 unmapped: 606208 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:41.963486+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 589824 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:42.963683+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60170240 unmapped: 589824 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:43.963841+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:44.964029+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:45.964222+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60178432 unmapped: 581632 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:46.964339+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:47.964448+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:48.964594+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60186624 unmapped: 573440 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:49.964754+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60194816 unmapped: 565248 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:50.964910+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60194816 unmapped: 565248 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:51.965077+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60219392 unmapped: 540672 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:52.965321+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60227584 unmapped: 532480 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:53.965504+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:54.965686+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:55.965910+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60235776 unmapped: 524288 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:56.966128+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:57.966351+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:58.966508+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:59.966715+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:00.966869+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:01.967053+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60252160 unmapped: 507904 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:02.967305+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60260352 unmapped: 499712 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:03.967728+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60268544 unmapped: 491520 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:04.967959+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:05.971093+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:06.972912+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60276736 unmapped: 483328 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:07.973629+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60293120 unmapped: 466944 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:08.974900+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60293120 unmapped: 466944 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:09.975809+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60301312 unmapped: 458752 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:10.976194+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60301312 unmapped: 458752 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:11.976448+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60325888 unmapped: 434176 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:12.978646+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60334080 unmapped: 425984 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:13.980541+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60334080 unmapped: 425984 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:14.980837+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60342272 unmapped: 417792 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:15.981114+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60350464 unmapped: 409600 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:16.981334+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60350464 unmapped: 409600 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:17.981458+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60358656 unmapped: 401408 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:18.981965+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60358656 unmapped: 401408 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:19.982110+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60366848 unmapped: 393216 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:20.982349+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60366848 unmapped: 393216 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:21.982615+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:22.982789+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:23.983149+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60375040 unmapped: 385024 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:24.983295+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:25.983641+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:26.983875+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60383232 unmapped: 376832 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:27.984016+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:28.984134+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:29.984309+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60391424 unmapped: 368640 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:30.984473+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60399616 unmapped: 360448 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:31.984739+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60416000 unmapped: 344064 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:32.984917+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 335872 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:33.985255+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60424192 unmapped: 335872 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:34.985491+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60432384 unmapped: 327680 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:35.985786+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60440576 unmapped: 319488 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:36.986093+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60440576 unmapped: 319488 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:37.986350+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:38.986627+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:39.986822+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60465152 unmapped: 294912 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:40.987019+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60473344 unmapped: 286720 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:41.987209+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60473344 unmapped: 286720 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:42.987369+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 278528 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:43.987520+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60481536 unmapped: 278528 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:44.987660+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:45.987905+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:46.988108+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60489728 unmapped: 270336 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:47.988336+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60497920 unmapped: 262144 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:48.988456+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60497920 unmapped: 262144 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:49.988591+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60506112 unmapped: 253952 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:50.988747+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60506112 unmapped: 253952 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:51.988866+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:52.988989+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:53.989141+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:54.989254+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:55.989472+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:56.989597+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60522496 unmapped: 237568 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:57.989760+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:58.989934+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:59.990164+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60530688 unmapped: 229376 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:00.990379+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:01.990557+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60538880 unmapped: 221184 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:02.990751+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:03.990898+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:04.991100+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60547072 unmapped: 212992 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:05.991253+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:06.991395+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:07.991551+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60563456 unmapped: 196608 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:08.991722+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60571648 unmapped: 188416 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:09.991960+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60571648 unmapped: 188416 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:10.992102+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60588032 unmapped: 172032 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:11.992276+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60596224 unmapped: 163840 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:12.992446+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60596224 unmapped: 163840 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:13.992723+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60604416 unmapped: 155648 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:14.992922+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60604416 unmapped: 155648 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:15.993224+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60612608 unmapped: 147456 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:16.993434+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60620800 unmapped: 139264 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:17.993591+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60620800 unmapped: 139264 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:18.993762+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:19.993901+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:20.994075+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60628992 unmapped: 131072 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:21.994286+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 122880 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:22.994487+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60637184 unmapped: 122880 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:23.994741+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 114688 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:24.994983+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60645376 unmapped: 114688 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:25.995272+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60653568 unmapped: 106496 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:26.995500+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:27.995732+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:28.995902+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60661760 unmapped: 98304 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:29.996108+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60669952 unmapped: 90112 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:30.996337+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60669952 unmapped: 90112 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:31.996492+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:32.996647+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:33.996826+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60678144 unmapped: 81920 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:34.997014+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:35.997235+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:36.997390+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60686336 unmapped: 73728 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:37.997568+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 65536 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:38.997759+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60694528 unmapped: 65536 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:39.997920+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:40.998118+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:41.998272+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60702720 unmapped: 57344 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:42.998447+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 49152 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:43.998636+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60710912 unmapped: 49152 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:44.998778+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:45.998944+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:46.999028+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60719104 unmapped: 40960 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:47.999211+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 32768 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:48.999381+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60727296 unmapped: 32768 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:49.999583+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 24576 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:50.999766+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60735488 unmapped: 24576 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:51.999897+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:53.000123+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:54.000259+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60743680 unmapped: 16384 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:55.000447+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:56.000619+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:57.000793+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60751872 unmapped: 8192 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:58.000927+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 0 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:59.001072+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60760064 unmapped: 0 heap: 60760064 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:00.001234+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:01.001357+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:02.001526+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60768256 unmapped: 1040384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:03.001695+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:04.001864+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:05.002017+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60776448 unmapped: 1032192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:06.002262+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60784640 unmapped: 1024000 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:07.002424+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60784640 unmapped: 1024000 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:08.002534+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:09.002679+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:10.002857+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60792832 unmapped: 1015808 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:11.003070+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60801024 unmapped: 1007616 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:12.003235+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60801024 unmapped: 1007616 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:13.003345+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60809216 unmapped: 999424 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:14.003444+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60809216 unmapped: 999424 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:15.003586+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:16.003766+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:17.003919+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60817408 unmapped: 991232 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:18.004078+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:19.004236+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:20.004406+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60825600 unmapped: 983040 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:21.004574+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60833792 unmapped: 974848 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:22.004751+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60833792 unmapped: 974848 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:23.004916+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:24.005079+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:25.005255+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60841984 unmapped: 966656 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:26.005464+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60850176 unmapped: 958464 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:27.005598+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60850176 unmapped: 958464 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:28.005764+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:29.005910+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:30.006067+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60858368 unmapped: 950272 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:31.006207+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60866560 unmapped: 942080 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:32.006329+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60866560 unmapped: 942080 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:33.006493+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:34.006642+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:35.006784+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60874752 unmapped: 933888 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:36.006994+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60882944 unmapped: 925696 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:37.007212+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60882944 unmapped: 925696 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:38.007382+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:39.007557+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:40.007755+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:41.007931+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60891136 unmapped: 917504 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:42.008082+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60899328 unmapped: 909312 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:43.008261+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60899328 unmapped: 909312 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:44.008410+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:45.008552+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:46.008755+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:47.008944+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60907520 unmapped: 901120 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:48.009121+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60915712 unmapped: 892928 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:49.009291+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60915712 unmapped: 892928 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:50.009565+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:51.009735+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:52.009898+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60923904 unmapped: 884736 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:53.010071+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60932096 unmapped: 876544 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:54.010248+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60932096 unmapped: 876544 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:55.010393+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60940288 unmapped: 868352 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:56.010573+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60948480 unmapped: 860160 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:57.010723+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60948480 unmapped: 860160 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:58.010906+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:59.011076+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:00.011237+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:01.011363+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:02.011521+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:03.011661+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60956672 unmapped: 851968 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:04.011829+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:05.012027+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60964864 unmapped: 843776 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:06.012251+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60973056 unmapped: 835584 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:07.012369+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60973056 unmapped: 835584 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:08.012500+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:09.012643+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:10.012783+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60981248 unmapped: 827392 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:11.012909+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:12.013078+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:13.013290+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60989440 unmapped: 819200 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:14.013475+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60997632 unmapped: 811008 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:15.013667+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 60997632 unmapped: 811008 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:16.013833+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61005824 unmapped: 802816 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:17.014005+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61005824 unmapped: 802816 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:18.014129+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:19.014545+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:20.014722+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:21.014888+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61014016 unmapped: 794624 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:22.016060+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:23.016283+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:24.016416+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61022208 unmapped: 786432 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:25.016604+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61030400 unmapped: 778240 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:26.016911+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61030400 unmapped: 778240 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:27.017025+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:28.017278+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:29.017538+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61038592 unmapped: 770048 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:30.017687+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61046784 unmapped: 761856 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:31.017818+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61046784 unmapped: 761856 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:32.017941+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:33.018075+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:34.018229+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61054976 unmapped: 753664 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:35.018509+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:36.018833+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:37.019034+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61063168 unmapped: 745472 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:38.019322+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:39.019575+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:40.019774+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61071360 unmapped: 737280 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:41.020000+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61079552 unmapped: 729088 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:42.020216+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61079552 unmapped: 729088 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:43.020390+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:44.020585+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:45.020713+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61087744 unmapped: 720896 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:46.020857+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 712704 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:47.021008+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61095936 unmapped: 712704 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:48.021122+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:49.021289+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:50.021454+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:51.021582+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61104128 unmapped: 704512 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:52.021935+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61112320 unmapped: 696320 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:53.022080+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61112320 unmapped: 696320 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:54.022598+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:55.022745+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:56.022909+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61120512 unmapped: 688128 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:57.023205+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:58.023334+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:59.023504+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61128704 unmapped: 679936 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:00.023781+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:01.024093+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61136896 unmapped: 671744 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:02.024372+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:03.024626+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:04.025006+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61153280 unmapped: 655360 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:05.025244+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:06.025495+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61161472 unmapped: 647168 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:07.025904+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:08.026085+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:09.026255+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:10.026449+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:11.026677+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:12.026950+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:13.027302+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61169664 unmapped: 638976 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:14.027504+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:15.027700+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61177856 unmapped: 630784 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:16.027873+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:17.028080+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61186048 unmapped: 622592 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:18.028225+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:19.028349+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:20.028506+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61194240 unmapped: 614400 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:21.028648+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:22.028799+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:23.028939+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61202432 unmapped: 606208 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:24.029366+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:25.030128+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61210624 unmapped: 598016 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:26.030449+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:27.030588+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:28.031001+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61218816 unmapped: 589824 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:29.031334+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:30.031619+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:31.031737+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61227008 unmapped: 581632 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:32.032460+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:33.032591+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:34.033012+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61235200 unmapped: 573440 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:35.033303+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:36.033489+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:37.033611+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61243392 unmapped: 565248 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:38.033770+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 557056 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:39.033940+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61251584 unmapped: 557056 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:40.034325+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:41.034577+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:42.034727+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61259776 unmapped: 548864 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:43.034891+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 540672 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:44.035028+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61267968 unmapped: 540672 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:45.035274+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:46.035529+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:47.035698+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61276160 unmapped: 532480 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:48.035851+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 524288 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:49.036047+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61284352 unmapped: 524288 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:50.036186+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:51.036355+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:52.036505+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61292544 unmapped: 516096 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:53.036657+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61300736 unmapped: 507904 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:54.036893+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61300736 unmapped: 507904 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:55.037052+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:56.037204+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:57.037339+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:58.037478+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61308928 unmapped: 499712 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:59.037618+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61317120 unmapped: 491520 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:00.037739+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61317120 unmapped: 491520 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:01.037900+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:02.038047+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:03.038388+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61325312 unmapped: 483328 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:04.038527+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61333504 unmapped: 475136 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:05.038669+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61333504 unmapped: 475136 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:06.038848+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:07.039215+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61341696 unmapped: 466944 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:08.039383+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:09.039575+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:10.039797+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61349888 unmapped: 458752 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:11.040013+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61358080 unmapped: 450560 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:12.040244+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61366272 unmapped: 442368 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:13.040617+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61366272 unmapped: 442368 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:14.040982+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61374464 unmapped: 434176 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:15.041254+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61374464 unmapped: 434176 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:16.041535+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61382656 unmapped: 425984 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:17.041763+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 417792 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:18.041934+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61390848 unmapped: 417792 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:19.042127+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61399040 unmapped: 409600 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:20.042245+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61399040 unmapped: 409600 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:21.042416+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:22.042633+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:23.042783+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:24.042939+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:25.043123+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:26.043437+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61407232 unmapped: 401408 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:27.043607+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:28.043728+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:29.043856+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61415424 unmapped: 393216 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:30.043961+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61423616 unmapped: 385024 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:31.044196+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 16.16 MB, 0.03 MB/s
                                           Interval WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61480960 unmapped: 327680 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:32.044355+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 319488 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:33.044484+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61489152 unmapped: 319488 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:34.044683+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:35.044820+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:36.044971+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61497344 unmapped: 311296 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:37.045128+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 303104 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:38.045341+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61505536 unmapped: 303104 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:39.045510+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:40.045639+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:41.045822+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:42.045974+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61513728 unmapped: 294912 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:43.046129+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:44.046307+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:45.046458+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61521920 unmapped: 286720 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:46.046668+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61530112 unmapped: 278528 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:47.046877+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61530112 unmapped: 278528 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:48.047889+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:49.048157+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:50.048385+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:51.048620+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61538304 unmapped: 270336 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:52.048946+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61546496 unmapped: 262144 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:53.049111+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61546496 unmapped: 262144 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:54.049236+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 253952 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:55.049414+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61554688 unmapped: 253952 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:56.049609+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61562880 unmapped: 245760 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:57.049738+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61571072 unmapped: 237568 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:58.049948+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61571072 unmapped: 237568 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:59.050123+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:00.050233+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:01.050415+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:02.050569+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61579264 unmapped: 229376 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:03.050789+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61587456 unmapped: 221184 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:04.050974+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61587456 unmapped: 221184 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:05.051212+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 212992 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:06.051445+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61595648 unmapped: 212992 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:07.051615+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:08.051801+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:09.052053+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61603840 unmapped: 204800 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:10.052271+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61612032 unmapped: 196608 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:11.072320+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61612032 unmapped: 196608 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:12.072485+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:13.072648+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:14.072796+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61620224 unmapped: 188416 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:15.072957+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 180224 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:16.073795+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61628416 unmapped: 180224 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:17.073972+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:18.074139+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:19.074331+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61636608 unmapped: 172032 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:20.074488+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61644800 unmapped: 163840 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:21.074724+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61652992 unmapped: 155648 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:22.074880+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:23.075055+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:24.075260+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61661184 unmapped: 147456 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:25.075473+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:26.075671+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:27.076079+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:28.076229+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:29.076372+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61669376 unmapped: 139264 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:30.076509+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 131072 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:31.076722+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61677568 unmapped: 131072 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:32.076907+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:33.077076+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:34.077238+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 122880 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:35.077403+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:36.077567+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:37.077712+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 114688 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:38.077838+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:39.077993+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:40.078157+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 106496 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:41.078369+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 98304 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:42.078518+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 98304 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:43.078694+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:44.078833+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:45.079011+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:46.079262+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:47.079394+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:48.079541+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61718528 unmapped: 90112 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:49.079726+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:50.079892+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:51.080131+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61726720 unmapped: 81920 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:52.080392+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 73728 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:53.080572+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61734912 unmapped: 73728 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:54.080803+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:55.081819+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:56.082281+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61743104 unmapped: 65536 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:57.083234+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 57344 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:58.083410+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61751296 unmapped: 57344 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:59.083546+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 49152 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:00.083663+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61759488 unmapped: 49152 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:01.083820+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:02.083953+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:03.084125+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61767680 unmapped: 40960 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:04.084636+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:05.084850+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:06.085096+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 32768 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:07.085500+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 24576 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:08.085717+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61784064 unmapped: 24576 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:09.085904+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:10.086238+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:11.086575+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:12.086813+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 16384 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:13.086944+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:14.087101+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:15.087316+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 8192 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:16.087545+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 0 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:17.087742+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 0 heap: 61808640 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:18.087895+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:19.088001+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:20.088220+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 1040384 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:21.088396+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 1032192 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:22.088629+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 1032192 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:23.088890+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61833216 unmapped: 1024000 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:24.089202+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61833216 unmapped: 1024000 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:25.089346+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:26.089554+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:27.089720+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 1015808 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:28.089931+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 1007616 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:29.090104+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 1007616 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:30.090253+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:31.090382+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:32.090521+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:33.090727+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:34.090859+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:35.090970+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:36.091092+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:37.091296+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:38.091470+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:39.091631+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:40.091779+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:41.091899+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:42.092034+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:43.092149+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:44.092215+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:45.092349+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:46.092493+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:47.092613+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:48.092735+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:49.112046+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:50.112202+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:51.121324+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:52.121477+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:53.121806+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:54.122001+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:55.122130+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:56.122367+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:57.122475+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:58.122584+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:59.122710+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:00.122919+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:01.123030+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:02.123131+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:03.123331+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:04.123462+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:05.123576+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:06.123779+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:07.124093+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:08.124215+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:09.124385+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:10.124525+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:11.124713+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:12.124958+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:13.125225+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:14.125427+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:15.125564+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:16.125750+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:17.125916+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:18.126094+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:19.126225+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:20.126455+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:21.126637+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:22.126813+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:23.126943+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:24.127127+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:25.127267+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:26.127464+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:27.127648+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:28.127832+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:29.128086+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:30.128316+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:31.128499+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:32.128652+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:33.128811+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:34.129011+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:35.129183+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:36.129339+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:37.129568+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:38.129716+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:39.129845+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:40.129955+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:41.130230+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:42.130399+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:43.130549+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:44.130695+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:45.130831+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:46.131021+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:47.131166+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:48.131335+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:49.131480+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:50.131684+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:51.131799+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:52.131942+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:53.132054+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:54.132227+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:55.132379+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:56.132523+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:57.132683+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:58.132840+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:59.132964+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:00.133154+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:01.133313+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:02.133485+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:03.133676+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:04.133815+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:05.133959+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:06.134234+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:07.134364+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:08.134490+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:09.134680+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:10.134885+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 999424 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:11.135052+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:12.135294+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:13.135552+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:14.135767+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 991232 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:15.136013+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: no keepalive since 2025-11-25T23:46:45.136110+0000 (2106-02-07T06:28:15.999867+0000 seconds), reconnecting
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _reopen_session rank -1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _add_conns ranks=[0]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): picked mon.compute-0 con 0x56223f69c800 addr [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): start opening mon connection
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): get_auth_request con 0x56223f69c800 auth_method 0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): _init_auth method 2
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): _init_auth already have auth, reseting
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): handle_auth_reply_more payload 9
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): handle_auth_reply_more payload_len 9
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient(hunting): handle_auth_done global_id 14211 payload 293
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _finish_hunting 0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: found mon.compute-0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _finish_auth 0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:15.138354+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_monmap mon_map magic: 0 v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient:  got monmap 1 from mon.compute-0 (according to old e1)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: dump:
                                           epoch 1
                                           fsid 101922db-575f-58e2-980f-928050464f69
                                           last_changed 2025-11-25T23:31:04.907397+0000
                                           created 2025-11-25T23:31:04.907397+0000
                                           min_mon_release 18 (reef)
                                           election_strategy: 1
                                           0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_config config(9 keys) v1
Nov 26 00:06:35 compute-0 ceph-osd[91111]: set_mon_vals no callback set
Nov 26 00:06:35 compute-0 ceph-osd[91111]: mgrc handle_mgr_map Got map version 9
Nov 26 00:06:35 compute-0 ceph-osd[91111]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:19.637788+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:20.637959+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:21.638089+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:22.638469+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:23.638665+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:24.672619+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:25.672789+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:26.672935+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:27.673161+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:28.673358+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:29.673510+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:30.673635+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:31.673789+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:32.673984+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:33.674143+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:34.674296+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:35.674428+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:36.674658+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:37.674873+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:38.675005+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:39.675193+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:40.675325+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:41.675524+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:42.675685+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:43.675828+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:44.675996+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:45.676374+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:46.676547+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:47.676671+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:48.676946+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:49.677085+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:50.677229+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:51.677359+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:52.677814+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:53.678043+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:54.678235+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:55.678415+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:56.678616+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:57.678784+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:58.678913+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:59.679047+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:00.679176+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:01.679293+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:02.679420+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:03.679701+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:04.679885+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:05.680084+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:06.680300+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:07.680466+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:08.680667+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:09.680873+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:10.681006+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:11.681214+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:12.681417+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:13.681606+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:14.681801+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:15.682018+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:16.682263+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:17.682429+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:18.682645+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:19.682805+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:20.682940+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:21.683049+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:22.683160+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:23.683340+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:24.683503+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:25.683663+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 974848 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:26.683828+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:27.683976+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:28.684120+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:29.684275+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:30.684449+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:31.684610+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:32.684773+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:33.684882+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:34.685038+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:35.685144+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: mgrc ms_handle_reset ms_handle_reset con 0x56223dd09c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 26 00:06:35 compute-0 ceph-osd[91111]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: get_auth_request con 0x56223f255800 auth_method 0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: mgrc handle_mgr_configure stats_period=5
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:36.685358+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:37.685519+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:38.685672+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:39.685836+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:40.686013+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 1081344 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:41.686154+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:42.686387+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:43.686575+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:44.686730+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:45.686866+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:46.687081+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:47.687327+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:48.687529+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:49.687672+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:50.687793+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:51.687962+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:52.688123+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:53.688244+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:54.688390+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:55.688558+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:56.688734+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:57.688860+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:58.689008+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:59.689193+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:00.689336+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:01.689465+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:02.689625+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:03.689790+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:04.689922+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:05.690045+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:06.690262+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:07.690409+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:08.690544+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:09.690695+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:10.690846+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:11.690999+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:12.691137+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:13.691334+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:14.691470+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:15.691600+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:16.692106+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:17.692310+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:18.692434+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:19.692609+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:20.692790+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:21.692926+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:22.693076+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:23.693447+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:24.693735+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:25.693993+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:26.694164+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:27.708493+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:28.708733+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:29.709009+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:30.709193+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:31.709300+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:32.709421+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:33.709590+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:34.709764+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:35.709896+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:36.710041+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:37.710231+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:38.710470+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:39.710618+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:40.710780+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:41.710928+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:42.711085+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:43.711249+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:44.711436+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:45.711662+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:46.711818+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:47.712023+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:48.712209+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:49.712369+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:50.712550+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:51.712709+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:52.712842+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:53.712964+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:54.713222+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:55.713348+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:56.713504+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:57.713643+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:58.714528+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:59.714698+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:00.714851+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:01.715232+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:02.715389+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:03.715831+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:04.715970+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:05.716367+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:06.716582+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:07.716811+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:08.717273+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:09.717665+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:10.718521+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:11.718732+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:12.718927+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:13.719109+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:14.719262+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:15.719392+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:16.719562+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:17.719701+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:18.719842+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:19.719957+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:20.720067+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:21.720257+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:22.720404+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:23.720610+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:24.720756+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:25.720896+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:26.721105+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:27.721249+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:28.721403+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:29.721526+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:30.721748+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:31.721940+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:32.722211+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:33.722414+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:34.722550+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:35.722680+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:36.722828+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:37.723006+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:38.723269+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:39.723395+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:40.723561+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:41.723718+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:42.723925+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:43.724070+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:44.724249+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:45.724405+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:46.724577+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:47.724767+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:48.724926+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:49.725054+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:50.725210+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:51.725372+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:52.725504+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:53.725641+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:54.725780+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:55.725995+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:56.726214+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:57.726351+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:58.726507+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:59.726836+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:00.727058+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:01.727207+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:02.727333+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:03.727474+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:04.727586+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:05.728161+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:06.728480+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:07.728620+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:08.728734+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:09.728847+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:10.728970+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:11.729081+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:12.729217+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:13.729304+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:14.729479+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:15.729663+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:16.729934+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:17.730120+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:18.730228+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:19.730326+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:20.730455+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:21.730590+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:22.730690+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:23.730845+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:24.731008+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:25.731199+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:26.731347+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:27.731466+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:28.731603+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:29.731744+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:30.731817+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:31.731960+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:32.732148+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:33.732317+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:34.732444+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:35.732574+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:36.732727+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:37.732854+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:38.733048+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:39.733224+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:40.733457+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:41.733598+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:42.733821+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:43.733996+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:44.734221+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:45.734495+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:46.734746+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:47.734917+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:48.735040+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:49.735237+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:50.735463+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:51.735675+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:52.735817+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:53.735964+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:54.736124+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:55.736370+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:56.736596+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:57.736765+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:58.736945+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:59.737241+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:00.737481+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:01.737648+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:02.737788+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:03.738792+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:04.739594+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:05.740672+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:06.741461+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:07.742079+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:08.742545+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:09.742861+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:10.743111+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:11.743323+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:12.743474+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:13.744231+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:14.744416+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:15.745106+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:16.745756+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:17.746333+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:18.746740+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:19.747280+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:20.747667+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:21.748060+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:22.748241+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:23.748529+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:24.748698+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:25.749064+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:26.749482+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:27.749731+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:28.749988+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:29.750239+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:30.750402+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:31.750515+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:32.750736+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:33.751004+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:34.751143+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:35.751352+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:36.751539+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:37.751704+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:38.751821+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:39.751981+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:40.752099+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:41.752266+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:42.752389+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:43.752546+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:44.752722+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:45.752903+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:46.753049+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:47.753232+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:48.753360+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:49.753545+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:50.753680+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:51.753877+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:52.754031+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:53.754232+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:54.754391+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:55.754556+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:56.754758+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:57.754927+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:58.755061+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:59.755276+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:00.755450+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:01.755599+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:02.755727+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:03.755866+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:04.756125+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:05.756279+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:06.756452+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:07.756708+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:08.756821+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:09.757073+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:10.757241+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:11.757606+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:12.757924+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:13.758111+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:14.758266+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:15.758437+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:16.758629+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:17.758820+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:18.758994+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:19.759318+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:20.759535+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:21.759748+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:22.760046+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:23.760278+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:24.760507+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61685760 unmapped: 1171456 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:25.760700+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:26.760958+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:27.761126+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:28.761325+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:29.761556+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:30.761741+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4172 writes, 19K keys, 4172 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4172 writes, 365 syncs, 11.43 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb9090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56223ceb91f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:31.761968+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:32.762226+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:33.762465+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:34.762631+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:35.762862+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:36.763151+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:37.763491+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:38.763697+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:39.763872+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:40.764029+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:41.764243+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:42.764427+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:43.764600+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:44.764751+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:45.764962+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:46.765991+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:47.766289+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:48.766431+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:49.766572+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:50.766725+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:51.766913+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:52.767132+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:53.767348+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:54.767550+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61693952 unmapped: 1163264 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:55.767691+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:56.767899+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:57.768338+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:58.768564+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:59.768741+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:00.768928+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:01.769145+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:02.769326+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:03.769523+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:04.769719+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:05.769895+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:06.770125+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:07.770315+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:08.770491+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:09.770650+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:10.770805+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:11.771048+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:12.771244+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:13.771422+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:14.771667+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:15.771899+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:16.772089+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:17.772290+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:18.772538+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:19.772799+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:20.773009+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:21.773214+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:22.773398+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:23.773552+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:24.773720+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:25.774069+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:26.774307+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:27.774509+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:28.774702+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:29.774962+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:30.775222+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:31.775378+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:32.775562+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:33.775727+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:34.775856+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:35.776037+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:36.776248+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:37.776420+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61702144 unmapped: 1155072 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:38.776578+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:39.776727+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:40.776885+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:41.777049+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:42.777237+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:43.777399+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:44.777561+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:45.777771+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:46.777995+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:47.778152+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:48.778318+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:49.778457+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:50.778638+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:51.778730+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:52.778862+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:53.778948+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:54.779069+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:55.779231+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:56.779392+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:57.779540+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:58.779659+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:59.779831+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:00.779953+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:01.780080+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:02.780248+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:03.780421+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:04.780566+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:05.780749+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:06.780934+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:07.781127+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x483eb/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:08.781330+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:09.781508+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:10.781642+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430108 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223ea28c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:11.781779+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61710336 unmapped: 1146880 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:12.781949+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 63 handle_osd_map epochs [64,65], i have 63, src has [1,65]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1071.016235352s of 1071.030395508s, submitted: 4
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 65 heartbeat osd_stat(store_statfs(0x4fe117000/0x0/0x4ffc00000, data 0x4afe6/0xb6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 966656 heap: 62857216 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:13.782091+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 17514496 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:14.782281+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 66 ms_handle_reset con 0x56223ea28c00 session 0x56223f2c0b40
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 17457152 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:15.782526+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 502193 data_alloc: 218103808 data_used: 114688
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 17448960 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:16.782768+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 17448960 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:17.783931+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:18.784812+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 67 ms_handle_reset con 0x56223e823c00 session 0x56223eae34a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd4a3000/0x0/0x4ffc00000, data 0xcbc602/0xd2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:19.785096+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:20.785838+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 539786 data_alloc: 218103808 data_used: 122880
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:21.786462+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:22.786709+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 16236544 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:23.787140+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 67 heartbeat osd_stat(store_statfs(0x4fd49e000/0x0/0x4ffc00000, data 0xcbdbfb/0xd2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.124011040s of 11.322376251s, submitted: 43
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:24.787397+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:25.787549+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:26.788002+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:27.788262+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:28.788624+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:29.789005+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:30.789374+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:31.789745+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:32.790074+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:33.790272+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:34.790609+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:35.790765+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 17350656 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:36.791053+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:37.791310+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:38.791527+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:39.791791+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:40.791951+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:41.792288+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:42.792549+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:43.792781+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:44.793050+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:45.793228+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:46.793482+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:47.793684+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:48.794003+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:49.794131+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:50.794371+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 542246 data_alloc: 218103808 data_used: 122880
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:51.796323+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fd49b000/0x0/0x4ffc00000, data 0xcbf09b/0xd32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:52.798118+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 17334272 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.678512573s of 28.689655304s, submitted: 13
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 69 ms_handle_reset con 0x56223e822800 session 0x56223eae2d20
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:53.798515+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 17227776 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:54.798820+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 17227776 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fd496000/0x0/0x4ffc00000, data 0xcc0a78/0xd37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:55.798998+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 17219584 heap: 79642624 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 604393 data_alloc: 218103808 data_used: 131072
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:56.799205+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 25485312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:57.799417+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fbc96000/0x0/0x4ffc00000, data 0x24c0a88/0x2538000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 25436160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 69 ms_handle_reset con 0x56223e822c00 session 0x56223f2205a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:58.799535+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 25436160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:59.799708+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 25419776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:00.799851+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 71 ms_handle_reset con 0x56223e823000 session 0x56223f221a40
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 24428544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 564280 data_alloc: 218103808 data_used: 139264
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:01.799981+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 24387584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc3640/0xd3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 71 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc3640/0xd3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:02.800114+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 72 ms_handle_reset con 0x56223e822800 session 0x56223f707e00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 24199168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:03.800273+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 24199168 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.976924896s of 11.311155319s, submitted: 70
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 73 ms_handle_reset con 0x56223e822c00 session 0x56223f5f1e00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:04.800472+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 24100864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 73 ms_handle_reset con 0x56223e823c00 session 0x56223f5f14a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:05.800632+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 23977984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 574353 data_alloc: 218103808 data_used: 139264
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223ea28c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:06.800797+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 23920640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223ea28c00 session 0x56223f5f0960
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e823400 session 0x56223f742f00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:07.801073+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 23904256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e823400 session 0x56223f5bf4a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 75 ms_handle_reset con 0x56223e822800 session 0x56223f5be960
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fd481000/0x0/0x4ffc00000, data 0xcc8cfc/0xd4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:08.801265+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 23904256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 76 ms_handle_reset con 0x56223e822c00 session 0x56223f707a40
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:09.801450+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 76 ms_handle_reset con 0x56223e823c00 session 0x56223f2c0780
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 23764992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223ea28c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223f254c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fd480000/0x0/0x4ffc00000, data 0xcca2d4/0xd4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [0,0,1,1])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:10.801568+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 77 ms_handle_reset con 0x56223f254c00 session 0x56223f59f860
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64413696 unmapped: 23625728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 77 ms_handle_reset con 0x56223ea28c00 session 0x56223f2e7860
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 588235 data_alloc: 218103808 data_used: 147456
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:11.801808+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 23592960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:12.801962+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64446464 unmapped: 23592960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:13.802232+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64454656 unmapped: 23584768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.005178452s of 10.492918968s, submitted: 152
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:14.802378+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822c00 session 0x56223f2205a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822800 session 0x56223ea89860
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 79 ms_handle_reset con 0x56223e822400 session 0x56223ea89680
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fd479000/0x0/0x4ffc00000, data 0xcce187/0xd54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:15.802577+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 594951 data_alloc: 218103808 data_used: 147456
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:16.802791+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64561152 unmapped: 23478272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:17.802992+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 23461888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:18.803147+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64577536 unmapped: 23461888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:19.803406+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 23453696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fd476000/0x0/0x4ffc00000, data 0xccf683/0xd57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 80 ms_handle_reset con 0x56223e823c00 session 0x56223ea88f00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:20.803647+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64585728 unmapped: 23453696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 597747 data_alloc: 218103808 data_used: 147456
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:21.803790+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 82 ms_handle_reset con 0x56223e822400 session 0x56223f5f0960
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 23298048 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:22.803953+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 83 ms_handle_reset con 0x56223e823c00 session 0x56223eae3860
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 23281664 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:23.804100+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 84 ms_handle_reset con 0x56223e822c00 session 0x56223eae2d20
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65863680 unmapped: 22175744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.822913170s of 10.012957573s, submitted: 69
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:24.804225+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 85 ms_handle_reset con 0x56223e822800 session 0x56223eae2b40
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65929216 unmapped: 22110208 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223ea28c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:25.804338+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 86 ms_handle_reset con 0x56223ea28c00 session 0x56223f2c1a40
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 86 heartbeat osd_stat(store_statfs(0x4fd45f000/0x0/0x4ffc00000, data 0xcd6c4b/0xd6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 634272 data_alloc: 218103808 data_used: 200704
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:26.804524+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:27.804682+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:28.804809+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 65953792 unmapped: 22085632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:29.804971+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fd459000/0x0/0x4ffc00000, data 0xcd996d/0xd74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 66002944 unmapped: 22036480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 87 ms_handle_reset con 0x56223e822800 session 0x56223f2210e0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:30.805159+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 88 ms_handle_reset con 0x56223e822400 session 0x56223f2c10e0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 20619264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 88 ms_handle_reset con 0x56223e822c00 session 0x56223f220000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 641350 data_alloc: 218103808 data_used: 200704
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e823c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e86dc00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223f255000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:31.805372+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 20455424 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 89 ms_handle_reset con 0x56223e86dc00 session 0x56223f220f00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:32.805569+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fbe7f000/0x0/0x4ffc00000, data 0xd01d2a/0xd9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 20348928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 90 ms_handle_reset con 0x562240975000 session 0x56223f707c20
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:33.805765+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 20234240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 91 ms_handle_reset con 0x562240975000 session 0x56223f5bef00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:34.805944+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.528375626s of 10.122215271s, submitted: 156
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822c00 session 0x56223ea01680
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822400 session 0x56223f11bc20
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 19128320 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 92 ms_handle_reset con 0x56223e822800 session 0x56223e9d41e0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:35.806125+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fbe79000/0x0/0x4ffc00000, data 0xd04238/0xda0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e86dc00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 18169856 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e86dc00 session 0x56223ea010e0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 652558 data_alloc: 218103808 data_used: 217088
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e822400 session 0x56223ea005a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:36.806342+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 93 ms_handle_reset con 0x56223e822800 session 0x56223f2e8960
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 18112512 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:37.806523+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 18079744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 94 heartbeat osd_stat(store_statfs(0x4fbe76000/0x0/0x4ffc00000, data 0xd06d56/0xda6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:38.806686+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 18022400 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 95 ms_handle_reset con 0x56223e822c00 session 0x56223f2e92c0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:39.806827+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975000 session 0x56223f2e8b40
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:40.807037+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 17973248 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664123 data_alloc: 218103808 data_used: 229376
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fbe70000/0x0/0x4ffc00000, data 0xd099ba/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:41.807189+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:42.807401+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:43.807617+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975400 session 0x56223e9d4000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 17956864 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822400 session 0x56223ea00000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975400 session 0x56223f707e00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822800 session 0x56223f2e74a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975000 session 0x56223dc9da40
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975c00 session 0x56223f220780
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x562240975c00 session 0x56223f743e00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:44.807795+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 ms_handle_reset con 0x56223e822400 session 0x56223ea005a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 17948672 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fbe70000/0x0/0x4ffc00000, data 0xd099ba/0xdac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.487519264s of 10.742533684s, submitted: 87
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:45.807962+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x56223e822800 session 0x56223ea010e0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975000 session 0x56223f2e8b40
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975400 session 0x56223f2e92c0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668581 data_alloc: 218103808 data_used: 237568
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:46.808103+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:47.808467+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:48.808574+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:49.808717+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 18006016 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 97 ms_handle_reset con 0x562240975000 session 0x56223f2e8000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240975c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:50.808847+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 97 heartbeat osd_stat(store_statfs(0x4fbe6d000/0x0/0x4ffc00000, data 0xd0aea2/0xdb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 17989632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668581 data_alloc: 218103808 data_used: 237568
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:51.809012+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x562240975c00 session 0x56223f2c03c0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066d000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240654c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x562240654c00 session 0x56223f2f8000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x56224066d000 session 0x56223f706d20
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 98 ms_handle_reset con 0x56223e822c00 session 0x56223f743a40
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240654c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 17915904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fbe6a000/0x0/0x4ffc00000, data 0xd0c45c/0xdb3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 98 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:52.809148+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 99 ms_handle_reset con 0x562240654c00 session 0x56223f5be960
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:53.809330+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:54.809473+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fbe65000/0x0/0x4ffc00000, data 0xd0de6a/0xdb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066cc00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 17907712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 100 ms_handle_reset con 0x56224066cc00 session 0x56223f5f0f00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:55.809614+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066c800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 679111 data_alloc: 218103808 data_used: 241664
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.938570976s of 11.146072388s, submitted: 55
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:56.809907+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56224066c800 session 0x56223f220780
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:57.810082+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 17850368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56223e822400 session 0x56223f59e5a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 101 ms_handle_reset con 0x56223e822800 session 0x56223ea01c20
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:58.810230+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56223e822400
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fbe61000/0x0/0x4ffc00000, data 0xd1066a/0xdbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 17825792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:59.810397+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 102 ms_handle_reset con 0x56223e822400 session 0x56223f2205a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:00.810590+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683786 data_alloc: 218103808 data_used: 245760
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:01.810743+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:02.810871+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:03.811102+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70246400 unmapped: 17793024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fbe5f000/0x0/0x4ffc00000, data 0xd11c84/0xdbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:04.811274+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x56223e823c00 session 0x56223f5f14a0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x56223f255000 session 0x56223f2f9680
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x562240654c00
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 17768448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 103 ms_handle_reset con 0x562240654c00 session 0x56223ea01a40
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fbe5c000/0x0/0x4ffc00000, data 0xd13140/0xdc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:05.811429+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 17768448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683002 data_alloc: 218103808 data_used: 241664
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066c800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 104 ms_handle_reset con 0x56224066c800 session 0x56223eae2780
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:06.811576+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066d000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.860716820s of 10.143070221s, submitted: 96
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 17752064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:07.811694+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _renew_subs
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 105 ms_handle_reset con 0x56224066d000 session 0x56223f2e9c20
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:08.811832+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:09.812310+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:10.813642+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689942 data_alloc: 218103808 data_used: 245760
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:11.813757+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:12.813883+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:13.814027+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:14.814277+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fbe7a000/0x0/0x4ffc00000, data 0xcf1bde/0xda2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:15.814382+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689942 data_alloc: 218103808 data_used: 245760
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:16.814577+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 17735680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:17.814746+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 17727488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:18.814914+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.980017662s of 12.055953979s, submitted: 14
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:19.815046+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:20.815244+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 692244 data_alloc: 218103808 data_used: 245760
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:21.815366+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:22.815540+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:23.815680+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:24.815878+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:25.816374+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:26.816765+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:27.817271+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:28.817623+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:29.817960+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:30.818205+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:31.818381+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:32.818542+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:33.818800+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:34.819005+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:35.819255+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:36.819586+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:37.819699+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:38.819853+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:39.820261+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:40.820526+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:41.820737+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:42.820880+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:43.821041+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:44.821311+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:45.821531+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:46.821763+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:47.821947+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:48.822114+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:49.822273+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:50.822423+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:51.822583+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:52.822765+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:53.822959+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:54.823163+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:55.823342+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:56.823510+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:57.823642+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:58.823764+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:59.823856+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:00.823968+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:01.824098+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:02.824223+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:03.824381+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:04.824511+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:05.824662+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:06.824840+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:07.824980+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:08.825138+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:09.825277+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:10.825418+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:11.825584+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:12.825716+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:13.825842+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 17694720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:14.826015+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:15.826199+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:16.826381+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:17.826509+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:18.826769+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:19.826911+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:20.827039+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:21.827163+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:22.827357+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:23.827537+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:24.827697+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:25.827844+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:26.828023+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:27.828223+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:28.828418+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:29.828607+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 17686528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:30.829017+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:31.829278+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:32.829842+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:33.830306+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:34.830706+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:35.832300+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:36.834231+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:37.834472+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:38.834673+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:39.834806+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:40.834942+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:41.835247+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:42.835373+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:43.835501+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:44.835617+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:45.835760+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 17678336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:46.835930+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:47.836064+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:48.836241+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:49.836415+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:50.905015+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:51.905159+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 17670144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:52.905326+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 17489920 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:53.905442+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'config show' '{prefix=config show}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 17170432 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:54.905560+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 16916480 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:55.905716+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 17014784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'log dump' '{prefix=log dump}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:56.906237+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'perf dump' '{prefix=perf dump}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'perf schema' '{prefix=perf schema}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:57.906392+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:58.906889+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:59.907008+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:00.917212+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:01.917363+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:02.917480+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:03.917620+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:04.917754+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:05.917924+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:06.918103+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:07.918208+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:08.918318+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:09.918425+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:10.918552+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:11.918686+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:12.918794+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:13.918916+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:14.919034+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:15.919153+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:16.920227+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:17.920385+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:18.920497+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:19.920616+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:20.920735+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:21.920867+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:22.921034+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:23.921199+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:24.921330+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:25.921473+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:26.921627+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:27.921751+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:28.921894+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:29.922008+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:30.922162+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:31.922363+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 16891904 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:32.922567+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:33.922749+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:34.923031+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:35.929278+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:36.929972+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:37.930489+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:38.930873+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:39.931227+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:40.931600+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:41.931735+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:42.931998+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:43.932229+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:44.932503+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:45.932692+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:46.932841+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:47.932984+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:48.933136+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:49.933250+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:50.933418+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:51.933609+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:52.933908+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:53.934085+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:54.934239+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:55.934454+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 16883712 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:56.934721+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:57.934907+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:58.935051+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:59.935221+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:00.935363+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:01.935532+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:02.935827+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:03.936012+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:04.936241+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:05.936558+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:06.936847+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 16875520 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:07.937004+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:08.938531+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:09.940222+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:10.940509+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:11.940666+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:12.941426+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:13.941740+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:14.941883+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:15.942011+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:16.942534+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:17.942786+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:18.943234+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:19.943440+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:20.943796+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:21.943945+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:22.944816+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:23.945267+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:24.946038+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:25.946423+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:26.947152+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:27.947815+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:28.947961+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:29.948143+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:30.948374+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:31.948504+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:32.948639+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:33.948941+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:34.949116+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:35.949285+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:36.949582+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:37.949716+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:38.949843+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 16867328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:39.950023+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 16859136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:40.950457+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 16859136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:41.961982+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:42.968269+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:43.972351+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:44.977121+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:45.984265+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:46.990620+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:47.992528+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:48.994008+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:49.995735+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:50.998317+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:52.001852+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:53.002264+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:54.003762+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:55.004039+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:56.004247+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:57.004949+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:58.005248+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:59.006375+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:00.007860+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:01.009085+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:02.010299+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:03.011458+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:04.012371+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:05.013370+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:06.013926+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:07.014424+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:08.014830+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:09.015032+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:10.015538+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:11.015933+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:12.016283+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:13.016446+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:14.016592+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:15.016742+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:16.016886+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:17.017057+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:18.017161+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 16850944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:19.017310+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:20.017465+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:21.017639+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:22.017802+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:23.018002+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:24.018200+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:25.018326+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 16842752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:26.018481+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:27.018774+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:28.018876+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:29.018955+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:30.019080+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:31.019227+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:32.019324+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:33.019510+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:34.019636+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:35.019780+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:36.019954+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:37.020149+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:38.020326+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:39.020444+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:40.020581+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:41.020711+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:42.020836+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:43.020982+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:44.021165+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:45.021359+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:46.021847+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:47.022512+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:48.022908+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:49.023228+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:50.023444+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:51.023578+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:52.023857+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:53.024101+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 16826368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:54.024380+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:55.024523+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:56.024699+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:57.024909+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:58.025042+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:59.025242+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:00.025394+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:01.025555+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:02.025708+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:03.025855+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:04.026058+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:05.026228+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:06.026388+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:07.026573+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:08.026753+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:09.026972+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:10.027281+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:11.027440+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:12.027592+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:13.027729+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:14.027867+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:15.028008+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:16.028159+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:17.028410+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:18.028518+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:19.028640+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:20.028765+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:21.028919+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:22.029025+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:23.029269+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:24.029434+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:25.029583+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:26.029729+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:27.029839+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:28.029991+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:29.030230+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:30.030369+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71221248 unmapped: 16818176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:31.030547+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:32.030708+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:33.030850+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:34.031073+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:35.031244+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:36.031417+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:37.031602+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:38.031770+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:39.031913+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:40.032060+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:41.032195+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:42.032348+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:43.032484+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:44.032672+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:45.032827+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:46.032965+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:47.033103+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:48.033253+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:49.033381+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:50.033572+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:51.033691+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:52.033832+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:53.033959+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:54.034087+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:55.034210+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:56.034322+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:57.034468+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:58.034601+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:59.034696+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:00.034801+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:01.034945+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:02.035098+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:03.035232+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:04.035399+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 16809984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:05.035585+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:06.035698+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:07.035871+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:08.036011+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:09.036223+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:10.036389+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:11.036553+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:12.036699+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:13.036864+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:14.037031+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:15.037212+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:16.037351+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:17.037507+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:18.037673+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:19.037824+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:20.037980+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:21.038255+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:22.038479+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:23.038698+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:24.039449+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:25.039921+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 16801792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:26.040205+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 16793600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:27.040373+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 16793600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:28.040866+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 16793600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:29.041346+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 16793600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:30.041771+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 16793600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:31.041987+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5481 writes, 23K keys, 5481 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5481 writes, 906 syncs, 6.05 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1309 writes, 3946 keys, 1309 commit groups, 1.0 writes per commit group, ingest: 2.20 MB, 0.00 MB/s
                                           Interval WAL: 1309 writes, 541 syncs, 2.42 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: handle_auth_request added challenge on 0x56224066c000
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 16785408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:32.042141+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 16785408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:33.042471+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 16785408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:34.042832+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 16785408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:35.043146+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 16785408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:36.043378+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: mgrc ms_handle_reset ms_handle_reset con 0x56223f255800
Nov 26 00:06:35 compute-0 ceph-osd[91111]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 26 00:06:35 compute-0 ceph-osd[91111]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: get_auth_request con 0x56224066c800 auth_method 0
Nov 26 00:06:35 compute-0 ceph-osd[91111]: mgrc handle_mgr_configure stats_period=5
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:37.043597+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:38.043886+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:39.044044+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:40.044230+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:41.044469+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:42.044708+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:43.044980+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:44.045218+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:45.045437+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 16605184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:46.045634+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:47.045856+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:48.046042+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:49.046198+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:50.046421+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:51.046620+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:52.047034+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:53.047617+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:54.047949+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:55.048240+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:56.048379+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:57.048532+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:58.052297+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:59.052463+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:00.052735+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:01.052857+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:02.053214+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:03.053365+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:04.053667+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:05.053841+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:06.054118+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:07.054310+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:08.054551+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:09.054813+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 16687104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:10.055155+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:11.055410+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:12.055619+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:13.055880+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:14.056117+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:15.056372+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:16.056641+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:17.056968+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:18.057258+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:19.057472+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:20.057634+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:21.057856+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:22.058033+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:23.058239+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:24.058459+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:25.058611+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:26.058953+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:27.059214+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:28.059622+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:29.059911+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:30.060087+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:31.060439+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:32.060714+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:33.060901+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:34.061103+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:35.061267+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:36.061413+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:37.061615+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:38.061830+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:39.061978+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:40.062142+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:41.062320+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:42.062504+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:43.062882+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:44.063152+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:45.063512+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:46.063737+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:47.063941+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:48.064155+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:49.064406+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:50.064649+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:51.064851+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:52.065059+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:53.065295+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:54.065566+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:55.065758+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:56.065979+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:57.066246+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:58.066382+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:59.066510+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:00.066622+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:01.066845+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:02.067014+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:03.067147+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:04.067407+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:05.067593+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 16678912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:06.067705+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:07.067849+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:08.068028+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:09.068243+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:10.068394+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:11.068657+0000)
Nov 26 00:06:35 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 26 00:06:35 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2748698357' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:12.068896+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:13.069095+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:14.069322+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:15.069520+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:16.069880+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:17.070369+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:18.070554+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:19.070714+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:20.070867+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:21.071009+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:22.071211+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:23.071375+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:24.071541+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:25.071692+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:26.071825+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:27.071976+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:28.072739+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:29.072922+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:30.073265+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:31.073594+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:32.074065+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:33.074664+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 16670720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:34.074877+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:35.075235+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:36.075491+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:37.075697+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:38.075857+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:39.079137+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:40.079733+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:41.080254+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:42.080582+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:43.080803+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:44.081024+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:45.081159+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:46.081496+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:47.082827+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:48.083248+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:49.083531+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:50.083810+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:51.084071+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:52.084444+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:53.084902+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:54.085052+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:55.085417+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:56.085633+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:57.085903+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:58.086484+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:59.087030+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:00.087248+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:01.087421+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:02.087656+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:03.087836+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:04.087995+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:05.088131+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:06.088284+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:07.088509+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:08.088652+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 16662528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:09.088833+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:10.088973+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:11.089122+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:12.089293+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:13.089466+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:14.089605+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:15.089749+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:16.089874+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:17.090107+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:18.090271+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:19.091022+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:20.091157+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:21.091302+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:22.091571+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:23.091714+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:24.091827+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:25.091965+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:26.092083+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:27.092235+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:28.092351+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:29.092496+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:30.092613+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:31.092798+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:32.092942+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:33.093131+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:34.093267+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:35.093622+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:36.094253+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:37.095956+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:38.096920+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 16654336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:39.097239+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:40.098274+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:41.099096+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:42.099432+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:43.100005+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:44.100254+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:45.100435+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:46.101028+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:47.101402+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:48.102290+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:49.103065+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:50.103492+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:51.103766+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:52.103902+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:53.104622+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:54.105140+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:55.105760+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:56.106010+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:57.106334+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:58.106586+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:59.106719+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:00.106865+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:01.107002+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:02.107145+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fbe78000/0x0/0x4ffc00000, data 0xcf307e/0xda5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 16646144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:03.107272+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'config show' '{prefix=config show}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:35 compute-0 ceph-osd[91111]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:35 compute-0 ceph-osd[91111]: bluestore.MempoolThread(0x56223cf97b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693204 data_alloc: 218103808 data_used: 270336
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 16211968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:04.107392+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 16269312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: tick
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_tickets
Nov 26 00:06:35 compute-0 ceph-osd[91111]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:05.107503+0000)
Nov 26 00:06:35 compute-0 ceph-osd[91111]: do_command 'log dump' '{prefix=log dump}'
Nov 26 00:06:36 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15077 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:36 compute-0 ceph-mon[75654]: from='client.15063 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:36 compute-0 ceph-mon[75654]: pgmap v1136: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:36 compute-0 ceph-mon[75654]: from='client.15065 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:36 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1014545830' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 26 00:06:36 compute-0 ceph-mon[75654]: from='client.15069 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:36 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1754443487' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 26 00:06:36 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2748698357' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 26 00:06:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 26 00:06:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125873004' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 00:06:36 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15081 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:36 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1137: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:36 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 26 00:06:36 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257827287' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 00:06:36 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15085 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:37 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15089 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 26 00:06:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399618046' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 00:06:37 compute-0 ceph-mon[75654]: from='client.15073 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:37 compute-0 ceph-mon[75654]: from='client.15077 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:37 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1125873004' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 26 00:06:37 compute-0 ceph-mon[75654]: from='client.15081 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:37 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2257827287' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 26 00:06:37 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3399618046' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 26 00:06:37 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15091 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:37 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 26 00:06:37 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623171305' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 26 00:06:38 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15099 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:38 compute-0 ceph-101922db-575f-58e2-980f-928050464f69-mgr-compute-0-gwqfsl[75950]: 2025-11-26T00:06:38.292+0000 7f36737f5640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 00:06:38 compute-0 ceph-mgr[75954]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 26 00:06:38 compute-0 ceph-mon[75654]: pgmap v1137: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:38 compute-0 ceph-mon[75654]: from='client.15085 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:38 compute-0 ceph-mon[75654]: from='client.15089 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:38 compute-0 ceph-mon[75654]: from='client.15091 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:38 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3623171305' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 26 00:06:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 26 00:06:38 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216966318' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 26 00:06:38 compute-0 crontab[279296]: (root) LIST (root)
Nov 26 00:06:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 26 00:06:38 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2332673555' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 26 00:06:38 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1138: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:38 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 26 00:06:38 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2702273331' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 26 00:06:39 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439148967' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 26 00:06:39 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/311575299' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-mon[75654]: from='client.15099 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1216966318' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2332673555' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2702273331' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/439148967' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/311575299' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 26 00:06:39 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179959095' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 26 00:06:39 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591011433' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 999424 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:34.842856+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61775872 unmapped: 999424 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.960755348s of 13.993970871s, submitted: 10
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:35.842988+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:05.570679+0000 osd.1 (osd.1) 48 : cluster [DBG] 4.f scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:05.584781+0000 osd.1 (osd.1) 49 : cluster [DBG] 4.f scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61792256 unmapped: 983040 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432852 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 49) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:05.570679+0000 osd.1 (osd.1) 48 : cluster [DBG] 4.f scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:05.584781+0000 osd.1 (osd.1) 49 : cluster [DBG] 4.f scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:36.843208+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61800448 unmapped: 974848 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:37.843368+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:38.843566+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:39.843787+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 966656 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:40.843927+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 958464 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432852 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:41.844110+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61825024 unmapped: 950272 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:42.844255+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:12.687862+0000 osd.1 (osd.1) 50 : cluster [DBG] 4.d scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:12.702026+0000 osd.1 (osd.1) 51 : cluster [DBG] 4.d scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61841408 unmapped: 933888 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 51) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:12.687862+0000 osd.1 (osd.1) 50 : cluster [DBG] 4.d scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:12.702026+0000 osd.1 (osd.1) 51 : cluster [DBG] 4.d scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:43.844504+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 925696 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:44.844666+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61849600 unmapped: 925696 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:45.844798+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 433999 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:46.844949+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:47.845134+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 917504 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.131777763s of 13.148053169s, submitted: 4
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:48.845307+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:18.719099+0000 osd.1 (osd.1) 52 : cluster [DBG] 4.2 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:18.733190+0000 osd.1 (osd.1) 53 : cluster [DBG] 4.2 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 909312 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 53) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:18.719099+0000 osd.1 (osd.1) 52 : cluster [DBG] 4.2 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:18.733190+0000 osd.1 (osd.1) 53 : cluster [DBG] 4.2 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:49.845548+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 909312 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:50.845690+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 901120 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 435146 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:51.845883+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 901120 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:52.846066+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 892928 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:53.846235+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 892928 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:54.846396+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:24.697793+0000 osd.1 (osd.1) 54 : cluster [DBG] 4.4 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:24.711869+0000 osd.1 (osd.1) 55 : cluster [DBG] 4.4 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61890560 unmapped: 884736 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 55) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:24.697793+0000 osd.1 (osd.1) 54 : cluster [DBG] 4.4 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:24.711869+0000 osd.1 (osd.1) 55 : cluster [DBG] 4.4 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:55.847211+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 860160 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436293 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:56.847595+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:26.720528+0000 osd.1 (osd.1) 56 : cluster [DBG] 6.1 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:26.734492+0000 osd.1 (osd.1) 57 : cluster [DBG] 6.1 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 851968 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 57) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:26.720528+0000 osd.1 (osd.1) 56 : cluster [DBG] 6.1 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:26.734492+0000 osd.1 (osd.1) 57 : cluster [DBG] 6.1 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:57.848445+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61923328 unmapped: 851968 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016060829s of 10.034604073s, submitted: 6
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:58.848861+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:28.753751+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.7 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:28.767842+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.7 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 843776 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 59) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:28.753751+0000 osd.1 (osd.1) 58 : cluster [DBG] 4.7 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:28.767842+0000 osd.1 (osd.1) 59 : cluster [DBG] 4.7 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:59.849229+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61931520 unmapped: 843776 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:00.849491+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:30.698001+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.5 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:30.712226+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.5 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 835584 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 439734 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 61) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:30.698001+0000 osd.1 (osd.1) 60 : cluster [DBG] 4.5 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:30.712226+0000 osd.1 (osd.1) 61 : cluster [DBG] 4.5 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:01.849880+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 827392 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:02.850097+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:32.624285+0000 osd.1 (osd.1) 62 : cluster [DBG] 5.1d scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:32.638363+0000 osd.1 (osd.1) 63 : cluster [DBG] 5.1d scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 63) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:32.624285+0000 osd.1 (osd.1) 62 : cluster [DBG] 5.1d scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:32.638363+0000 osd.1 (osd.1) 63 : cluster [DBG] 5.1d scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:03.850251+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:04.850373+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 811008 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:05.850623+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 802816 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440882 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:06.850791+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61972480 unmapped: 802816 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:07.850932+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 794624 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:08.851098+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 794624 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.789491653s of 10.819688797s, submitted: 6
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:09.851248+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:39.573549+0000 osd.1 (osd.1) 64 : cluster [DBG] 4.9 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:39.587398+0000 osd.1 (osd.1) 65 : cluster [DBG] 4.9 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 65) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:39.573549+0000 osd.1 (osd.1) 64 : cluster [DBG] 4.9 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:39.587398+0000 osd.1 (osd.1) 65 : cluster [DBG] 4.9 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:10.851512+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 442029 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:11.851694+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 786432 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:12.851865+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:42.508345+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.8 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:42.522449+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.8 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 67) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:42.508345+0000 osd.1 (osd.1) 66 : cluster [DBG] 4.8 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:42.522449+0000 osd.1 (osd.1) 67 : cluster [DBG] 4.8 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:13.852070+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:14.852238+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:44.477082+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.14 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:44.491189+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.14 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 69) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:44.477082+0000 osd.1 (osd.1) 68 : cluster [DBG] 4.14 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:44.491189+0000 osd.1 (osd.1) 69 : cluster [DBG] 4.14 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 778240 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:15.852480+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 761856 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 444324 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:16.852744+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:46.443633+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.12 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:46.457926+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.12 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 71) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:46.443633+0000 osd.1 (osd.1) 70 : cluster [DBG] 4.12 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:46.457926+0000 osd.1 (osd.1) 71 : cluster [DBG] 4.12 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 761856 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:17.852950+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:47.433360+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.10 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:47.447492+0000 osd.1 (osd.1) 73 : cluster [DBG] 4.10 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 73) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:47.433360+0000 osd.1 (osd.1) 72 : cluster [DBG] 4.10 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:47.447492+0000 osd.1 (osd.1) 73 : cluster [DBG] 4.10 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 737280 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:18.853099+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 737280 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:19.853242+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 729088 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:20.898772+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 729088 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446620 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.852318764s of 11.895004272s, submitted: 10
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:21.898942+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:51.468599+0000 osd.1 (osd.1) 74 : cluster [DBG] 2.1b scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:51.482700+0000 osd.1 (osd.1) 75 : cluster [DBG] 2.1b scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 75) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:51.468599+0000 osd.1 (osd.1) 74 : cluster [DBG] 2.1b scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:51.482700+0000 osd.1 (osd.1) 75 : cluster [DBG] 2.1b scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 712704 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:22.899190+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 704512 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:23.899384+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62070784 unmapped: 704512 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:24.899546+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 696320 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:25.899659+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:55.580024+0000 osd.1 (osd.1) 76 : cluster [DBG] 2.17 deep-scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:55.594114+0000 osd.1 (osd.1) 77 : cluster [DBG] 2.17 deep-scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 77) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:55.580024+0000 osd.1 (osd.1) 76 : cluster [DBG] 2.17 deep-scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:55.594114+0000 osd.1 (osd.1) 77 : cluster [DBG] 2.17 deep-scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 679936 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 448916 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:26.899852+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 671744 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:27.899973+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 671744 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:28.900106+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:58.500728+0000 osd.1 (osd.1) 78 : cluster [DBG] 5.11 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:58.514817+0000 osd.1 (osd.1) 79 : cluster [DBG] 5.11 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 79) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:58.500728+0000 osd.1 (osd.1) 78 : cluster [DBG] 5.11 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:58.514817+0000 osd.1 (osd.1) 79 : cluster [DBG] 5.11 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 655360 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:29.900507+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:59.459754+0000 osd.1 (osd.1) 80 : cluster [DBG] 2.15 deep-scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:35:59.473847+0000 osd.1 (osd.1) 81 : cluster [DBG] 2.15 deep-scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 81) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:59.459754+0000 osd.1 (osd.1) 80 : cluster [DBG] 2.15 deep-scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:35:59.473847+0000 osd.1 (osd.1) 81 : cluster [DBG] 2.15 deep-scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62119936 unmapped: 655360 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:30.900715+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:00.429488+0000 osd.1 (osd.1) 82 : cluster [DBG] 5.12 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:00.443864+0000 osd.1 (osd.1) 83 : cluster [DBG] 5.12 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 83) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:00.429488+0000 osd.1 (osd.1) 82 : cluster [DBG] 5.12 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:00.443864+0000 osd.1 (osd.1) 83 : cluster [DBG] 5.12 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 638976 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452360 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:31.900941+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:01.416751+0000 osd.1 (osd.1) 84 : cluster [DBG] 5.13 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:01.430716+0000 osd.1 (osd.1) 85 : cluster [DBG] 5.13 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 85) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:01.416751+0000 osd.1 (osd.1) 84 : cluster [DBG] 5.13 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:01.430716+0000 osd.1 (osd.1) 85 : cluster [DBG] 5.13 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:32.901200+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:33.901341+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 630784 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:34.901461+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62152704 unmapped: 622592 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:35.901597+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 614400 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 453508 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:36.901734+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 606208 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.911233902s of 15.955444336s, submitted: 12
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:37.901901+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:07.423953+0000 osd.1 (osd.1) 86 : cluster [DBG] 5.9 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:07.438037+0000 osd.1 (osd.1) 87 : cluster [DBG] 5.9 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 87) v1
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:07.423953+0000 osd.1 (osd.1) 86 : cluster [DBG] 5.9 scrub starts
Nov 26 00:06:39 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:07.438037+0000 osd.1 (osd.1) 87 : cluster [DBG] 5.9 scrub ok
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 589824 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:38.902124+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62185472 unmapped: 589824 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:39.902239+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 581632 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:40.902357+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 581632 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454655 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:41.902492+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 573440 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:42.902663+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 573440 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:43.902880+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 565248 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:44.903368+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 557056 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:45.903611+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:39 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:39 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62218240 unmapped: 557056 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:39 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454655 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:39 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:46.903772+0000)
Nov 26 00:06:39 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:47.903901+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.033292770s of 11.040759087s, submitted: 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:48.904022+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:18.464712+0000 osd.1 (osd.1) 88 : cluster [DBG] 2.3 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:18.478778+0000 osd.1 (osd.1) 89 : cluster [DBG] 2.3 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 89) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:18.464712+0000 osd.1 (osd.1) 88 : cluster [DBG] 2.3 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:18.478778+0000 osd.1 (osd.1) 89 : cluster [DBG] 2.3 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 548864 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:49.904257+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 540672 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:50.904426+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 540672 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 455802 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:51.904687+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:21.510339+0000 osd.1 (osd.1) 90 : cluster [DBG] 2.4 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:21.524302+0000 osd.1 (osd.1) 91 : cluster [DBG] 2.4 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 516096 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 91) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:21.510339+0000 osd.1 (osd.1) 90 : cluster [DBG] 2.4 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:21.524302+0000 osd.1 (osd.1) 91 : cluster [DBG] 2.4 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.7 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:52.904990+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:22.556881+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.7 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:22.570941+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.7 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62259200 unmapped: 516096 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 93) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:22.556881+0000 osd.1 (osd.1) 92 : cluster [DBG] 2.7 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:22.570941+0000 osd.1 (osd.1) 93 : cluster [DBG] 2.7 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:53.905262+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 507904 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:54.905415+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62267392 unmapped: 507904 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:55.905598+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 499712 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 458096 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:56.905757+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 491520 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:57.905907+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:27.591428+0000 osd.1 (osd.1) 94 : cluster [DBG] 2.6 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:27.605504+0000 osd.1 (osd.1) 95 : cluster [DBG] 2.6 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62283776 unmapped: 491520 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 95) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:27.591428+0000 osd.1 (osd.1) 94 : cluster [DBG] 2.6 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:27.605504+0000 osd.1 (osd.1) 95 : cluster [DBG] 2.6 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:58.906069+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 483328 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:59.906272+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 475136 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.077341080s of 12.105804443s, submitted: 8
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:00.906468+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:30.570514+0000 osd.1 (osd.1) 96 : cluster [DBG] 2.d scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:30.584652+0000 osd.1 (osd.1) 97 : cluster [DBG] 2.d scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 458752 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 460390 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 97) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:30.570514+0000 osd.1 (osd.1) 96 : cluster [DBG] 2.d scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:30.584652+0000 osd.1 (osd.1) 97 : cluster [DBG] 2.d scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:01.907717+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 450560 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:02.908070+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 450560 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:03.909133+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:04.909687+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:05.910943+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 460390 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 442368 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.16 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:06.911612+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:36.568094+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.16 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:36.582258+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.16 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 425984 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 99) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:36.568094+0000 osd.1 (osd.1) 98 : cluster [DBG] 5.16 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:36.582258+0000 osd.1 (osd.1) 99 : cluster [DBG] 5.16 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:07.911843+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62349312 unmapped: 425984 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:08.912010+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 417792 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:09.912249+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:39.597771+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.a deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:39.611914+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.a deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62357504 unmapped: 417792 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 101) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:39.597771+0000 osd.1 (osd.1) 100 : cluster [DBG] 2.a deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:39.611914+0000 osd.1 (osd.1) 101 : cluster [DBG] 2.a deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:10.912461+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:11.912676+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:12.913279+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 409600 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:13.913756+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 401408 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:14.914223+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62373888 unmapped: 401408 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:15.914595+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:16.914921+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:17.915059+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 393216 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:18.915217+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 385024 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:19.915385+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62390272 unmapped: 385024 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:20.915522+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 462685 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62398464 unmapped: 376832 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:21.915635+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 360448 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:22.915817+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 360448 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.019342422s of 23.038656235s, submitted: 6
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:23.915968+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:53.609229+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.f scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:53.623287+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.f scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 344064 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 103) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:53.609229+0000 osd.1 (osd.1) 102 : cluster [DBG] 5.f scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:53.623287+0000 osd.1 (osd.1) 103 : cluster [DBG] 5.f scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:24.916236+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:54.576098+0000 osd.1 (osd.1) 104 : cluster [DBG] 2.5 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:36:54.590204+0000 osd.1 (osd.1) 105 : cluster [DBG] 2.5 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 344064 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 105) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:54.576098+0000 osd.1 (osd.1) 104 : cluster [DBG] 2.5 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:36:54.590204+0000 osd.1 (osd.1) 105 : cluster [DBG] 2.5 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:25.916529+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464979 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:26.916731+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:27.916884+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 327680 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:28.917063+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 319488 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:29.917275+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 319488 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:30.917483+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 464979 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:31.917708+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:32.917862+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62464000 unmapped: 311296 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:33.918044+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:03.446915+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.9 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:03.460961+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.9 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 107) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:03.446915+0000 osd.1 (osd.1) 106 : cluster [DBG] 2.9 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:03.460961+0000 osd.1 (osd.1) 107 : cluster [DBG] 2.9 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 303104 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:34.918352+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 303104 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:35.918639+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.775456429s of 12.810349464s, submitted: 6
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 467273 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 294912 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:36.918773+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:06.419652+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.1 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:06.433774+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.1 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 109) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:06.419652+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.1 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:06.433774+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.1 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.1a deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 286720 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:37.918969+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:07.432757+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.1a deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:07.446848+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.1a deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 111) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:07.432757+0000 osd.1 (osd.1) 110 : cluster [DBG] 5.1a deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:07.446848+0000 osd.1 (osd.1) 111 : cluster [DBG] 5.1a deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 270336 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:38.919209+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:08.421484+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.19 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:08.435543+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.19 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 113) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:08.421484+0000 osd.1 (osd.1) 112 : cluster [DBG] 5.19 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:08.435543+0000 osd.1 (osd.1) 113 : cluster [DBG] 5.19 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 262144 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:39.919588+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 245760 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:40.919722+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:10.366751+0000 osd.1 (osd.1) 114 : cluster [DBG] 5.c scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:10.380842+0000 osd.1 (osd.1) 115 : cluster [DBG] 5.c scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 115) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:10.366751+0000 osd.1 (osd.1) 114 : cluster [DBG] 5.c scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:10.380842+0000 osd.1 (osd.1) 115 : cluster [DBG] 5.c scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470716 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 237568 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:41.919907+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 237568 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:42.920060+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 229376 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:43.920260+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 221184 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:44.920384+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 221184 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:45.920543+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 470716 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:46.920663+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:47.920776+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 204800 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:48.920907+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:49.921078+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:50.921293+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.953562737s of 14.978596687s, submitted: 8
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 471864 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 196608 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:51.921539+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:21.398317+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.18 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:21.412412+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.18 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 117) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:21.398317+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.18 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:21.412412+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.18 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 188416 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:52.921807+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 188416 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:53.922096+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 180224 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:54.922325+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 180224 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:55.922482+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473011 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 163840 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:56.922670+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:26.383011+0000 osd.1 (osd.1) 118 : cluster [DBG] 6.2 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:26.397073+0000 osd.1 (osd.1) 119 : cluster [DBG] 6.2 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 119) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:26.383011+0000 osd.1 (osd.1) 118 : cluster [DBG] 6.2 deep-scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:26.397073+0000 osd.1 (osd.1) 119 : cluster [DBG] 6.2 deep-scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 155648 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:57.922895+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62619648 unmapped: 155648 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:58.923095+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 147456 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:59.923296+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 147456 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:00.923460+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 473011 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:01.923598+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:02.923782+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 131072 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:03.923947+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.053456306s of 13.070212364s, submitted: 4
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:04.924106+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:34.468396+0000 osd.1 (osd.1) 120 : cluster [DBG] 6.6 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:34.486014+0000 osd.1 (osd.1) 121 : cluster [DBG] 6.6 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 121) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:34.468396+0000 osd.1 (osd.1) 120 : cluster [DBG] 6.6 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:34.486014+0000 osd.1 (osd.1) 121 : cluster [DBG] 6.6 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:05.924554+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 474158 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 114688 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:06.925965+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 106496 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:07.926711+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 106496 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:08.926907+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 90112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:09.927103+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:39.523649+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.e scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:39.541341+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.e scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 123) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:39.523649+0000 osd.1 (osd.1) 122 : cluster [DBG] 6.e scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:39.541341+0000 osd.1 (osd.1) 123 : cluster [DBG] 6.e scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 90112 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:10.927279+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:40.481572+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.c scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:40.499231+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.c scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 125) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:40.481572+0000 osd.1 (osd.1) 124 : cluster [DBG] 6.c scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:40.499231+0000 osd.1 (osd.1) 125 : cluster [DBG] 6.c scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 476452 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:11.927561+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:12.928119+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 73728 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:13.928546+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 65536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:14.928724+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.003563881s of 11.035881042s, submitted: 6
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 65536 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:15.928914+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:45.504493+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.4 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:45.532795+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.4 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 127) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:45.504493+0000 osd.1 (osd.1) 126 : cluster [DBG] 6.4 scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:45.532795+0000 osd.1 (osd.1) 127 : cluster [DBG] 6.4 scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 477599 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 57344 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:16.929123+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 40960 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:17.929513+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:47.566577+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.b scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:47.584273+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.b scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 129) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:47.566577+0000 osd.1 (osd.1) 128 : cluster [DBG] 6.b scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:47.584273+0000 osd.1 (osd.1) 129 : cluster [DBG] 6.b scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 32768 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:18.929797+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 24576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:19.930017+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:49.555024+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.d scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  will send 2025-11-25T23:37:49.576236+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.d scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client handle_log_ack log(last 131) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:49.555024+0000 osd.1 (osd.1) 130 : cluster [DBG] 6.d scrub starts
Nov 26 00:06:40 compute-0 ceph-osd[90055]: log_client  logged 2025-11-25T23:37:49.576236+0000 osd.1 (osd.1) 131 : cluster [DBG] 6.d scrub ok
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 24576 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:20.930322+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:21.930528+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:22.930821+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 16384 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:23.930973+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 8192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:24.931165+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 8192 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:25.931345+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 0 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:26.931609+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 0 heap: 62775296 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:27.931793+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:28.932111+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:29.932462+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1040384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:30.933271+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:31.933478+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:32.933787+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:33.933980+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:34.934249+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:35.934406+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:36.934770+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:37.935020+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1032192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:38.935305+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:39.935731+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1024000 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:40.935938+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:41.936260+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:42.936493+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1015808 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:43.936736+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1007616 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:44.936912+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1007616 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:45.937117+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:46.937434+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:47.937646+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 999424 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:48.937854+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:49.938133+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 991232 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:50.938359+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 991232 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:51.938666+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 983040 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:52.938947+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 974848 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:53.939149+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 974848 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:54.939343+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:55.939577+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:56.939813+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 966656 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:57.940046+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 958464 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:58.940301+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 958464 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:59.940530+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:00.940739+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:01.940929+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 950272 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:02.941125+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 942080 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:03.941250+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 942080 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:04.941394+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:05.941586+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:06.941790+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 933888 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:07.941969+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:08.942209+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:09.942448+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 925696 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:10.942578+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 917504 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:11.942803+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:12.942982+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:13.943211+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 909312 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:14.943393+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 901120 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:15.943556+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 901120 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:16.943696+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:17.943849+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:18.944032+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 892928 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:19.944213+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 884736 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:20.944362+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 884736 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:21.944522+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:22.944632+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:23.944792+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 876544 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:24.944940+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 868352 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:25.945150+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 868352 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:26.945396+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:27.945669+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:28.945854+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:29.946238+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 860160 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:30.946441+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:31.946591+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:32.946778+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 851968 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:33.946984+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 843776 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:34.947238+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 843776 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:35.947385+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 835584 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:36.947551+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 835584 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:37.948042+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:38.948221+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:39.948560+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 827392 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:40.948851+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 819200 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:41.949073+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 819200 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:42.949255+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:43.949430+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:44.949549+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 811008 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:45.949686+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 802816 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:46.949884+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 802816 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:47.950030+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:48.950255+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:49.950443+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:50.950595+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:51.950731+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:52.950893+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 794624 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:53.951046+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:54.951234+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 786432 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:55.951346+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:56.951485+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:57.951631+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 778240 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:58.951786+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 770048 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:59.951952+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 770048 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:00.952066+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:01.952158+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:02.952295+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 761856 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:03.952427+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:04.952558+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:05.952718+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 753664 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:06.952930+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 745472 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:07.954562+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 745472 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:08.954675+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 737280 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:09.954833+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 737280 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:10.955018+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:11.955247+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:12.955396+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 729088 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:13.955536+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 720896 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:14.955660+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 720896 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:15.955798+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:16.955911+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:17.956033+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 704512 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:18.956143+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 696320 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:19.956359+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 696320 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:20.956493+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:21.956614+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:22.956790+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 688128 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:23.956977+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:24.957262+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:25.957374+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 679936 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:26.957532+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 671744 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:27.957702+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 671744 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:28.957865+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:29.958049+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:30.958225+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 663552 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:31.958439+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 655360 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:32.958607+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 655360 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:33.958779+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:34.958909+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:35.959072+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 647168 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:36.959254+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:37.959413+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:38.959604+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 638976 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:39.959813+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 630784 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:40.959972+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 630784 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:41.960131+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:42.960256+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:43.960436+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 622592 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:44.960584+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:45.960755+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:46.960889+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 614400 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:47.961047+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 606208 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:48.961262+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 606208 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:49.961450+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:50.961625+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:51.961796+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 598016 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:52.961946+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:53.962198+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:54.962381+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 589824 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:55.962502+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 581632 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:56.962652+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 581632 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:57.962811+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:58.962974+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:59.963159+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:00.963342+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:01.963506+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:02.963730+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 573440 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:03.963892+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 565248 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:04.964062+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 565248 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:05.964232+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:06.964344+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:07.964484+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 557056 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:08.964624+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 548864 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:09.964800+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 548864 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:10.964966+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:11.965154+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:12.965310+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 540672 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:13.965499+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 532480 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:14.965668+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 532480 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:15.965824+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:16.965974+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:17.966134+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 524288 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:18.966254+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:19.966405+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:20.966635+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 516096 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:21.966785+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 507904 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:22.966979+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 507904 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:23.967140+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 499712 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:24.967277+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 499712 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:25.967435+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 491520 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:26.967969+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:27.968099+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:28.968367+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 483328 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:29.968558+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:30.968700+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:31.968859+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 475136 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:32.969220+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 466944 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:33.969513+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 466944 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:34.969717+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 458752 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:35.969985+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 458752 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:36.970159+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:37.970553+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:38.970890+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 450560 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:39.971108+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 442368 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:40.971236+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 434176 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:41.971433+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 434176 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:42.971572+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 425984 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:43.971732+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 425984 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:44.971846+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:45.971962+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:46.972073+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 417792 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:47.972218+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 409600 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:48.972369+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 409600 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:49.972652+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:50.972883+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:51.973158+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 401408 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:52.973333+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 393216 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:53.973623+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 393216 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:54.973803+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:55.973956+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:56.974103+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 385024 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:57.974274+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:58.974450+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:59.975466+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 376832 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:00.975738+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 368640 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:01.976235+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 368640 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:02.976531+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:03.976741+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:04.976997+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 360448 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:05.977360+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:06.977807+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:07.978006+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 352256 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:08.978214+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:09.978407+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:10.978559+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63479808 unmapped: 344064 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:11.978713+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 335872 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:12.978869+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63488000 unmapped: 335872 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:13.979035+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:14.979158+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:15.979284+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 327680 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:16.979411+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 319488 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:17.979564+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 319488 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:18.979728+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:19.979958+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:20.980161+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 311296 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:21.980905+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 303104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:22.981098+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 303104 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:23.982480+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:24.982874+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:25.983022+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 294912 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:26.983213+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:27.983390+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:28.983720+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 286720 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:29.984720+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:30.984837+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:31.985033+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 278528 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:32.985219+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 270336 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:33.985385+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 270336 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:34.985555+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:35.985722+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:36.986298+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63561728 unmapped: 262144 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:37.986434+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:38.986667+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:39.986885+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63569920 unmapped: 253952 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:40.987081+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 245760 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:41.987278+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 245760 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:42.987489+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:43.987649+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:44.987817+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 237568 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:45.987945+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 229376 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:46.988097+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 229376 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:47.988237+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 221184 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:48.988411+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 221184 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:49.988613+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:50.988735+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:51.988846+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 212992 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:52.989160+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:53.989342+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:54.989441+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:55.989556+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 204800 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:56.989683+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 196608 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:57.989828+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 196608 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:58.989984+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:59.990137+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:00.990290+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 188416 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:01.990427+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 180224 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:02.990609+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63643648 unmapped: 180224 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:03.990752+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:04.990914+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:05.991116+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 172032 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:06.991902+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 163840 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:07.992108+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 163840 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:08.992264+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 155648 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:09.992442+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 155648 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:10.992581+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 147456 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:11.992722+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:12.992866+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:13.993011+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 139264 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:14.993248+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:15.993373+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:16.993573+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 131072 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:17.993750+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 122880 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:18.993863+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 122880 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:19.994079+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 114688 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:20.994230+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 114688 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:21.994443+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:22.994590+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:23.994729+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 106496 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:24.994870+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 16.54 MB, 0.03 MB/s
                                           Interval WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 16384 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:25.995072+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 8192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:26.995251+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 8192 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:27.995380+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 0 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:28.995501+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 0 heap: 63823872 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:29.995644+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:30.995758+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:31.995898+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:32.996110+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 1040384 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:33.996257+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:34.996438+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:35.996573+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:36.996748+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:37.996943+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 1032192 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:38.997148+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:39.997347+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 1024000 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:40.997492+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:41.997621+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:42.997744+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:43.997893+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 1015808 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:44.998059+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1007616 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:45.998256+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 1007616 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:46.998473+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:47.998641+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:48.998758+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 999424 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:49.998911+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 991232 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:50.999056+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 991232 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:51.999248+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:52.999413+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:53.999576+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 983040 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:54.999735+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:55.999892+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:57.000050+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 974848 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:58.000197+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 966656 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:59.000359+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 966656 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:00.000553+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:01.000754+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:02.000953+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 958464 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:03.001205+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 950272 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:04.001368+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 950272 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:05.001562+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:06.001817+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:07.002029+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 942080 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:08.002225+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 933888 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:09.002418+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 933888 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:10.002595+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:11.002779+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:12.002939+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 925696 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:13.003126+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:14.003228+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:15.003369+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 917504 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:16.003491+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 909312 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:17.003638+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 909312 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:18.003762+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:19.003981+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:20.004272+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 901120 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:21.004479+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:22.004693+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:23.004927+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 892928 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:24.005146+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 884736 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:25.005322+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 884736 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:26.005503+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:27.005690+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:28.005885+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 876544 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:29.006022+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 868352 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:30.006164+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 868352 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:31.006303+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:32.006439+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:33.006636+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 860160 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:34.006755+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:35.006920+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:36.007077+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 851968 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:37.007219+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 843776 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:38.007349+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 843776 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:39.007470+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:40.007620+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:41.007746+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 835584 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:42.007894+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:43.008024+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:44.008151+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 827392 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:45.008236+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 819200 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:46.008379+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 819200 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:47.008507+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:48.008627+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:49.008893+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 802816 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:50.009091+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 794624 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:51.009251+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 794624 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:52.009412+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:53.009599+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:54.009754+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 786432 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:55.009921+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:56.010071+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:57.010232+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 778240 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:58.010534+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:59.010649+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:00.010822+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 770048 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:01.010974+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 761856 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:02.011108+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 761856 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:03.011242+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:04.011367+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:05.011510+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:06.011673+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 753664 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:07.011824+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 745472 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:08.012030+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 745472 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:09.012199+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 737280 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:10.012411+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 737280 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:11.012552+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 729088 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:12.012741+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 729088 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:13.012955+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:14.013108+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:15.013276+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 720896 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:16.013433+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 712704 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:17.013557+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 712704 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:18.013723+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:19.013859+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:20.014051+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:21.014214+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:22.014362+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:23.014530+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:24.014668+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 704512 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:25.014793+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:26.014912+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:27.015246+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:28.015493+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:29.015670+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:30.015946+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:31.016090+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:32.016297+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:33.016549+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:34.016752+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:35.016899+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:36.017021+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:37.017228+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:38.017349+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:39.017540+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:40.017771+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:41.017903+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:42.018108+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:43.018257+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:44.018401+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:45.018590+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:46.018719+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:47.018868+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:48.019048+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:49.019222+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:50.019392+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:51.121456+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:52.121609+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:53.121731+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:54.121853+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:55.122019+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:56.122130+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:57.122228+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:58.122365+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:59.122524+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:00.122761+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:01.122927+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:02.123043+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:03.123221+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:04.123387+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:05.123531+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:06.123714+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:07.123895+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:08.124051+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:09.124258+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:10.124593+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:11.124791+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:12.125027+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:13.125269+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:14.125385+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:15.125495+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:16.125622+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:17.125749+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:18.125874+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:19.126004+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:20.126224+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:21.126384+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:22.126550+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:23.126733+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:24.126919+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:25.127074+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:26.127658+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:27.127780+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:28.127928+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:29.128118+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:30.128434+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:31.128593+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:32.131042+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:33.131229+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:34.131351+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:35.131546+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:36.131678+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:37.131803+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:38.131954+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:39.132133+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:40.132316+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:41.132535+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:42.132717+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:43.132903+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:44.133090+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:45.133240+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:46.133380+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:47.133538+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:48.133664+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:49.133833+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:50.133999+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:51.134159+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:52.134307+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:53.134437+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:54.134616+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:55.134750+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:56.134852+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:57.135014+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:58.135128+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:59.135312+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:00.135536+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:01.135705+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:02.135867+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:03.136016+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:04.136163+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:05.136336+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:06.136510+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:07.136677+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:08.136858+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:09.137017+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:10.137241+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:11.137361+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:12.137492+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:13.137672+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:14.137810+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:15.137942+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:16.138108+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:17.138264+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:18.138413+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:19.138509+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:20.138876+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:21.138994+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:22.139148+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:23.139253+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:24.139378+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:25.139518+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:26.139664+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:27.139760+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:28.139937+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:29.140093+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:30.140236+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:31.140396+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:32.140569+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:33.140779+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:34.140923+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:35.141071+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:36.141217+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:37.141406+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:38.141563+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:39.141670+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:40.141879+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64176128 unmapped: 696320 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:41.142055+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:42.142187+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:43.142308+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:44.142445+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:45.142713+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:46.142846+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:47.142999+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:48.143303+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:49.143426+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:50.144427+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:51.144574+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:52.144701+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:53.144875+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:54.145021+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:55.145411+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:56.145643+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:57.145841+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:58.146031+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:59.146226+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:00.146371+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:01.146502+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:02.146599+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:03.146758+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:04.146925+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:05.147136+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:06.147264+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:07.147454+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:08.147771+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:09.148012+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:10.148249+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:11.148450+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:12.148616+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:13.148734+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:14.148875+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:15.158162+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:16.158361+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:17.158499+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:18.158684+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:19.158917+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:20.159798+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:21.159947+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:22.160106+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:23.160287+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:24.160446+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:25.160589+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:26.160739+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:27.160876+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:28.161036+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:29.161228+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:30.161409+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64184320 unmapped: 688128 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:31.187242+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: mgrc ms_handle_reset ms_handle_reset con 0x5613ea45dc00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 26 00:06:40 compute-0 ceph-osd[90055]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: get_auth_request con 0x5613eaec7000 auth_method 0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: mgrc handle_mgr_configure stats_period=5
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 ms_handle_reset con 0x5613eb896400 session 0x5613eb1c5860
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613eb896c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 ms_handle_reset con 0x5613eb897000 session 0x5613ebc8a5a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613eb896400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:32.187392+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:33.187546+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:34.187717+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:35.187882+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:36.188087+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:37.188242+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:38.188375+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:39.188542+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:40.188748+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:41.188856+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:42.188991+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:43.189142+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:44.189258+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:45.189463+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:46.189745+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:47.189937+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:48.190116+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:49.190291+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:50.190519+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:51.190676+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:52.190805+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:53.190961+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:54.191114+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:55.191282+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:56.191412+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:57.191558+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:58.191697+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:59.191819+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:00.191990+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:01.192143+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:02.192318+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:03.192437+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:04.192592+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:05.192709+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:06.192845+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:07.192957+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:08.193085+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:09.193232+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:10.193783+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:11.193936+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:12.194093+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:13.194234+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:14.194381+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:15.194527+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:16.194665+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:17.194806+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:18.194957+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:19.195104+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:20.195248+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:21.195370+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:22.195512+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:23.195662+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:24.195875+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:25.196073+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:26.196328+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:27.196445+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:28.196786+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:29.196954+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:30.197329+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64356352 unmapped: 516096 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:31.197569+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:32.197820+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:33.197979+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:34.198194+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:35.198341+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:36.198479+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:37.198609+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:38.198725+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:39.199001+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:40.199211+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:41.199367+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:42.199506+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:43.199664+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:44.199791+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:45.199955+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:46.200295+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:47.200499+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:48.200617+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:49.200779+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:50.200952+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:51.201090+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:52.201325+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:53.201485+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:54.201623+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:55.201924+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:56.202105+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:57.202586+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:58.202731+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:59.203027+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:00.203233+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:01.203737+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:02.204213+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:03.204591+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:04.205467+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:05.206255+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:06.206726+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:07.206961+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:08.207157+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:09.207374+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:10.207591+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:11.207820+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:12.207962+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:13.208227+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:14.208449+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:15.208669+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:16.208825+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:17.208950+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:18.209082+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:19.209219+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:20.209385+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:21.209562+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:22.209739+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:23.209949+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:24.210233+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:25.210508+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:26.210686+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:27.210849+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:28.211014+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:29.211239+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:30.211461+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:31.211630+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:32.211791+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:33.212014+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:34.212443+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:35.212630+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:36.212742+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:37.212866+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:38.213003+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:39.213134+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:40.213386+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:41.213568+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:42.213791+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:43.213969+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:44.214152+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:45.214322+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:46.214503+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:47.214838+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:48.214962+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:49.215113+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:50.215284+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:51.215392+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:52.215556+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:53.215690+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:54.215827+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:55.215953+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:56.216081+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:57.216265+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:58.216489+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:59.216677+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:00.217251+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:01.218421+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:02.218666+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:03.218936+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:04.219295+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:05.219845+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:06.219975+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:07.220124+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:08.220249+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:09.220423+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:10.220594+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64266240 unmapped: 606208 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:11.220750+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:12.220938+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:13.221112+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:14.221349+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:15.221522+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:16.221695+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:17.221916+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:18.265360+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:19.265527+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:20.265689+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:21.265838+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:22.265996+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:23.266155+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:24.266362+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:25.266480+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:26.266600+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:27.266729+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:28.266848+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:29.267224+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:30.267366+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:31.267517+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:32.267656+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:33.267814+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:34.268013+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:35.268220+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:36.268369+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:37.268515+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:38.268628+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:39.268823+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:40.269061+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:41.269228+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:42.269470+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:43.269705+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:44.269934+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:45.270070+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:46.270218+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:47.270382+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:48.270580+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:49.270733+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:50.270951+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:51.271102+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:54.050420+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:55.050588+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:56.050832+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:57.050987+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:58.051087+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:59.051220+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:00.051329+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:01.051472+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:02.051611+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:03.051753+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:04.053437+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:05.054339+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:06.056125+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:07.057136+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:08.057652+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:09.057889+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:10.058086+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:11.058968+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:12.059636+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:13.059967+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:14.060282+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:15.060411+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:16.060778+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:17.061090+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:18.061347+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:19.061699+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:20.061960+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:21.062257+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:22.062452+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:23.062608+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:24.062761+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:25.062903+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:26.063012+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:27.063240+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:28.063427+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:29.063661+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:30.063951+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:31.064133+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:32.064252+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:33.064368+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:34.064535+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:35.064692+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:36.064808+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:37.064944+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:38.065100+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:39.065358+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:40.065480+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:41.065622+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:42.065744+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:43.065871+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:44.066036+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:45.066164+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:46.066360+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:47.066467+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:48.066600+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:49.066707+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:50.066866+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:51.067090+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:52.365612+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:53.365754+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:54.365975+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:55.366093+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:56.366257+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:57.366416+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:58.366565+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:59.366744+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:00.366881+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:01.367061+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:02.367206+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:03.367378+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:04.367537+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:05.367707+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:06.368678+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:07.368844+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:08.369026+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:09.369253+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:10.369473+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:11.369728+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:12.369924+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:13.370113+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:14.370298+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:15.370480+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:16.370647+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:17.370843+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:18.370974+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:19.371239+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:20.371418+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:21.371641+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:22.371834+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:23.372009+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:24.372204+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:25.372383+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 4489 writes, 20K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4489 writes, 490 syncs, 9.16 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e9617090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5613e96171f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:26.372564+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:27.372722+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:28.372905+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:29.373074+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:30.373262+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:31.373504+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:32.373831+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:33.374024+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:34.374299+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:35.374491+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:36.374695+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:37.374872+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:38.375070+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:39.375254+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:40.375445+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:41.375651+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:42.375838+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:43.376028+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:44.376204+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:45.376371+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:46.376611+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:47.376770+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:48.376935+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:49.377074+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:50.377238+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:51.377428+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:52.377599+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:53.377822+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:54.378009+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:55.378230+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:56.378399+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:57.378568+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:58.378770+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:59.378967+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:00.379117+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:01.379375+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:02.379515+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:03.379712+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:04.379843+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:05.379995+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:06.380157+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:07.380393+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:08.380567+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:09.380712+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:10.380928+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:11.381155+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:12.381356+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:13.381496+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:14.381699+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:15.381868+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:16.382016+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:17.382232+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:18.382453+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:19.382658+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:20.382843+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:21.383053+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:22.383256+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:23.383434+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:24.383597+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:25.383750+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:26.383954+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:27.384113+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:28.384265+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:29.384399+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:30.384595+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:31.384838+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:32.385033+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:33.385304+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:34.385500+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:35.385643+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:36.385812+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:37.386011+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:38.386146+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:39.386322+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:40.386487+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:41.386678+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:42.386810+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:43.386971+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:44.387139+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:45.387288+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64274432 unmapped: 598016 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:46.387441+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:47.387589+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:48.387710+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:49.387929+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:50.388121+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:51.388376+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:52.388514+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:53.388680+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:54.388829+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:55.388951+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:56.389049+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:57.389266+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:58.389410+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:59.389568+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:00.389761+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:01.389971+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:02.390126+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:03.390304+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:04.390454+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:05.390625+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:06.390800+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:07.390997+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:08.391131+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:09.391317+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:10.391540+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0xb8a01/0x121000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 479893 data_alloc: 218103808 data_used: 16384
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64282624 unmapped: 589824 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:11.391750+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 1016.414916992s of 1016.440307617s, submitted: 6
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64299008 unmapped: 573440 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:12.391949+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 499712 heap: 64872448 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:13.392108+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 65 heartbeat osd_stat(store_statfs(0x4fe0a4000/0x0/0x4ffc00000, data 0xbb5dc/0x128000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64438272 unmapped: 17219584 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:14.392276+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 66 ms_handle_reset con 0x5613ece8c800 session 0x5613eb982b40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 17137664 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:15.392420+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 547922 data_alloc: 218103808 data_used: 24576
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64520192 unmapped: 17137664 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:16.392580+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64528384 unmapped: 17129472 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:17.392797+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed314000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64708608 unmapped: 16949248 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:18.393008+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 67 ms_handle_reset con 0x5613ed314000 session 0x5613ebd4de00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fd09f000/0x0/0x4ffc00000, data 0x10be188/0x112e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:19.393240+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:20.393529+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:21.393809+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:22.393982+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:23.394232+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:24.394438+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:25.394665+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:26.394846+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:27.395075+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:28.395329+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:29.395516+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:30.395698+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:31.395895+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:32.396069+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:33.396281+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:34.396458+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64741376 unmapped: 16916480 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:35.396616+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:36.396786+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:37.396939+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:38.397130+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:39.397267+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:40.397434+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:41.397687+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:42.397885+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:43.398050+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:44.398239+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:45.398385+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:46.398537+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:47.398731+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:48.398875+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 68 heartbeat osd_stat(store_statfs(0x4fd09c000/0x0/0x4ffc00000, data 0x10bf628/0x1131000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:49.399111+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:50.399328+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64749568 unmapped: 16908288 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 610691 data_alloc: 218103808 data_used: 36864
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:51.399728+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 16900096 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:52.400081+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64757760 unmapped: 16900096 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed314400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.834548950s of 41.041542053s, submitted: 53
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:53.400266+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 69 ms_handle_reset con 0x5613ed314400 session 0x5613eb995e00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 16883712 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:54.400619+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fd099000/0x0/0x4ffc00000, data 0x10c0be2/0x1134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 16883712 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:55.400889+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64774144 unmapped: 16883712 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 615480 data_alloc: 218103808 data_used: 45056
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fc89a000/0x0/0x4ffc00000, data 0x18c0be2/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:56.401035+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 64872448 unmapped: 16785408 heap: 81657856 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:57.401200+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 24993792 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fb09a000/0x0/0x4ffc00000, data 0x30c0be2/0x3134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:58.401328+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65060864 unmapped: 24993792 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 70 ms_handle_reset con 0x5613ed315c00 session 0x5613ebe07e00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:59.401461+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 24961024 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:00.401783+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65093632 unmapped: 24961024 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 71 ms_handle_reset con 0x5613ed315800 session 0x5613eb982b40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 629376 data_alloc: 218103808 data_used: 53248
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:01.402074+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 65126400 unmapped: 24928256 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 71 heartbeat osd_stat(store_statfs(0x4fd093000/0x0/0x4ffc00000, data 0x10c37aa/0x113a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:02.402273+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66183168 unmapped: 23871488 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 72 ms_handle_reset con 0x5613ece8c800 session 0x5613eb9823c0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:03.402476+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66256896 unmapped: 23797760 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed314000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:04.402593+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.184603691s of 11.587786674s, submitted: 73
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 23781376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 73 ms_handle_reset con 0x5613ed314000 session 0x5613ecd212c0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:05.403519+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed314400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 73 ms_handle_reset con 0x5613ed314400 session 0x5613ec2ad2c0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 23781376 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 73 ms_handle_reset con 0x5613ed315c00 session 0x5613eb982000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 636864 data_alloc: 218103808 data_used: 61440
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:06.403815+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 23748608 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 74 heartbeat osd_stat(store_statfs(0x4fd084000/0x0/0x4ffc00000, data 0x10c80af/0x1149000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:07.403988+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 23658496 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 74 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613ed315400 session 0x5613eb982b40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613edb41400 session 0x5613ecd463c0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613ece8c800 session 0x5613ec2ad4a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 75 ms_handle_reset con 0x5613ed315c00 session 0x5613ec2e1e00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:08.404166+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 23584768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee378000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee378c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:09.404371+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 23560192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 76 ms_handle_reset con 0x5613ee378c00 session 0x5613eaf134a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 76 ms_handle_reset con 0x5613ee378000 session 0x5613ea3f1e00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:10.404535+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 22372352 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 77 ms_handle_reset con 0x5613ed315c00 session 0x5613eaf894a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 662651 data_alloc: 218103808 data_used: 69632
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 77 ms_handle_reset con 0x5613ece8c800 session 0x5613ec2ac1e0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:11.404809+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 22249472 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 77 heartbeat osd_stat(store_statfs(0x4fcc6a000/0x0/0x4ffc00000, data 0x10cbf1a/0x1150000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:12.404994+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 22216704 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 77 ms_handle_reset con 0x5613edb41400 session 0x5613ea5e7680
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee378c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee379000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:13.405195+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 22183936 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:14.405342+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 22151168 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.674486160s of 10.168670654s, submitted: 142
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 79 ms_handle_reset con 0x5613ee379000 session 0x5613eaf89e00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 79 ms_handle_reset con 0x5613ee378c00 session 0x5613ea5e74a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 79 ms_handle_reset con 0x5613ece8c800 session 0x5613eb003e00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:15.405503+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 22044672 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 672501 data_alloc: 218103808 data_used: 77824
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:16.405655+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 22044672 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:17.405859+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 22044672 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 80 heartbeat osd_stat(store_statfs(0x4fcc65000/0x0/0x4ffc00000, data 0x10cebaf/0x1158000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:18.406027+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 22118400 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:19.406243+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 22118400 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:20.406426+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 22011904 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 81 ms_handle_reset con 0x5613ed315c00 session 0x5613eac6d2c0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 683009 data_alloc: 218103808 data_used: 77824
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:21.406663+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 81 heartbeat osd_stat(store_statfs(0x4fcc5d000/0x0/0x4ffc00000, data 0x10d16d0/0x1160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 20881408 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 82 ms_handle_reset con 0x5613edb41400 session 0x5613eaf894a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fcc5d000/0x0/0x4ffc00000, data 0x10d16d0/0x1160000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:22.406796+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 82 heartbeat osd_stat(store_statfs(0x4fbab8000/0x0/0x4ffc00000, data 0x10d2c9a/0x1164000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee379000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 20807680 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 83 ms_handle_reset con 0x5613ee379000 session 0x5613ecd1ab40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:23.406943+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee379400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69312512 unmapped: 20742144 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 84 ms_handle_reset con 0x5613ee379400 session 0x5613eb003860
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:24.407081+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 20643840 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.775205612s of 10.068033218s, submitted: 94
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 85 ms_handle_reset con 0x5613ece8c800 session 0x5613ea5e6960
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fbaab000/0x0/0x4ffc00000, data 0x10d86e0/0x1171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:25.407245+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 20619264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 85 heartbeat osd_stat(store_statfs(0x4fbaab000/0x0/0x4ffc00000, data 0x10d86e0/0x1171000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613ed315c00 session 0x5613eb994960
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 710759 data_alloc: 218103808 data_used: 77824
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:26.407393+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 20635648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613edb41400 session 0x5613ecd1ad20
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:27.407545+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 20635648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee379000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613ee379000 session 0x5613ecd1a780
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:28.407693+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69419008 unmapped: 20635648 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 86 ms_handle_reset con 0x5613edb63000 session 0x5613ecd1a000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 86 heartbeat osd_stat(store_statfs(0x4fbaa6000/0x0/0x4ffc00000, data 0x10d9caa/0x1175000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:29.407813+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 20619264 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 87 ms_handle_reset con 0x5613ece8c800 session 0x5613ecd1b4a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:30.407891+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 20578304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 88 ms_handle_reset con 0x5613ed315c00 session 0x5613eada63c0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 716900 data_alloc: 218103808 data_used: 77824
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 88 ms_handle_reset con 0x5613edb41400 session 0x5613eb0032c0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:31.408107+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee379000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69623808 unmapped: 20430848 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 89 ms_handle_reset con 0x5613edb63000 session 0x5613ec2e0b40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fba9d000/0x0/0x4ffc00000, data 0x10dd738/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:32.408265+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 20406272 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 90 ms_handle_reset con 0x5613edb63800 session 0x5613eb982b40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:33.408425+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 20316160 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 91 ms_handle_reset con 0x5613ece8c800 session 0x5613eac6cb40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:34.408621+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 20250624 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.562761307s of 10.121302605s, submitted: 134
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 92 ms_handle_reset con 0x5613edb63000 session 0x5613eada6000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 92 ms_handle_reset con 0x5613ed315c00 session 0x5613eac6cf00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 92 ms_handle_reset con 0x5613edb41400 session 0x5613ebe063c0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:35.408771+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 19079168 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 726803 data_alloc: 218103808 data_used: 90112
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 93 ms_handle_reset con 0x5613edb63c00 session 0x5613ebd4cd20
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fba9a000/0x0/0x4ffc00000, data 0x10e215e/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:36.408944+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 19185664 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 93 ms_handle_reset con 0x5613ece8c800 session 0x5613ea5e7c20
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 94 ms_handle_reset con 0x5613ed315c00 session 0x5613ec2ade00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:37.409133+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 19193856 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:38.409281+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 19193856 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 95 ms_handle_reset con 0x5613edb41400 session 0x5613eb983860
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:39.409432+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 19177472 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:40.409568+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb63000 session 0x5613ea3f03c0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 738425 data_alloc: 218103808 data_used: 86016
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:41.409714+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba8f000/0x0/0x4ffc00000, data 0x10e67de/0x118d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fba8f000/0x0/0x4ffc00000, data 0x10e67de/0x118d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:42.409888+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:43.410081+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613eb862400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613eb862400 session 0x5613eaf12d20
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ece8c800 session 0x5613eaf12b40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ed315c00 session 0x5613eaf125a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 19267584 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb41400 session 0x5613ecd1a000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb4c000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb4c000 session 0x5613ecd1ad20
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb63000 session 0x5613ecd1a780
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ece8c800 session 0x5613eb994960
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613ed315c00 session 0x5613eb994780
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb41400 session 0x5613eb002b40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:44.410499+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb4c000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 ms_handle_reset con 0x5613edb4c000 session 0x5613eb003860
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 18186240 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:45.410626+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 18186240 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 738018 data_alloc: 218103808 data_used: 86016
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ecba0000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.961263657s of 11.402002335s, submitted: 155
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:46.410752+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 97 ms_handle_reset con 0x5613ecba0000 session 0x5613eac6cf00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 17940480 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:47.410957+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 17940480 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba69000/0x0/0x4ffc00000, data 0x110bcb6/0x11b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:48.411118+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 17924096 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:49.411245+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba69000/0x0/0x4ffc00000, data 0x110bcb6/0x11b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 17924096 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 97 heartbeat osd_stat(store_statfs(0x4fba69000/0x0/0x4ffc00000, data 0x110bcb6/0x11b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:50.411380+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 97 ms_handle_reset con 0x5613edb41400 session 0x5613ec2ad4a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb4c000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 17874944 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746531 data_alloc: 218103808 data_used: 102400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:51.411526+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 97 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 17825792 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb4c000 session 0x5613eb994b40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb62000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb63c00 session 0x5613ea5e7680
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb62000 session 0x5613ecd1a000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb62400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 98 ms_handle_reset con 0x5613edb62400 session 0x5613eb983c20
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb41400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:52.411676+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72433664 unmapped: 17620992 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 99 ms_handle_reset con 0x5613edb41400 session 0x5613ebd4d680
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613edb63c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:53.411844+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 17596416 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613edb63c00 session 0x5613eada3680
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:54.412002+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d8400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ec2d8400 session 0x5613eada25a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d9400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ec2d9400 session 0x5613eada2f00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 17530880 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee36e000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ee36e000 session 0x5613eada30e0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee36e000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:55.412163+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 100 ms_handle_reset con 0x5613ee36e000 session 0x5613eaf89860
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 100 heartbeat osd_stat(store_statfs(0x4fba5e000/0x0/0x4ffc00000, data 0x110fe54/0x11bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 17506304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d8400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 757056 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:56.412396+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 17506304 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.367811203s of 10.547314644s, submitted: 63
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 101 ms_handle_reset con 0x5613ec2d8400 session 0x5613eb139860
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:57.412563+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 17489920 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:58.412720+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 101 ms_handle_reset con 0x5613ece8c800 session 0x5613ec2e1e00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 101 ms_handle_reset con 0x5613ed315c00 session 0x5613ebd4de00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 17539072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d9400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:59.412850+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fba5e000/0x0/0x4ffc00000, data 0x111146e/0x11bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 17539072 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _renew_subs
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 102 ms_handle_reset con 0x5613ec2d9400 session 0x5613eb994d20
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:00.413028+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 17448960 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 763148 data_alloc: 218103808 data_used: 106496
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:01.413252+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 17440768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:02.413408+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 17440768 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:03.413575+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 17424384 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 103 heartbeat osd_stat(store_statfs(0x4fba7b000/0x0/0x4ffc00000, data 0x10eff54/0x11a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:04.413750+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 17424384 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 103 ms_handle_reset con 0x5613ee379000 session 0x5613eada74a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 103 ms_handle_reset con 0x5613edb63400 session 0x5613eb983a40
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d8400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 103 ms_handle_reset con 0x5613ec2d8400 session 0x5613eaf16000
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:05.413913+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 764913 data_alloc: 218103808 data_used: 114688
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:06.414043+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fba7a000/0x0/0x4ffc00000, data 0x10f13e5/0x11a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ece8c800
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 104 ms_handle_reset con 0x5613ece8c800 session 0x5613eac0f860
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ed315c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.541474342s of 10.000211716s, submitted: 137
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:07.414163+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 105 ms_handle_reset con 0x5613ed315c00 session 0x5613eac0e780
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10f13c2/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:08.414332+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:09.414461+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:10.414607+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fba79000/0x0/0x4ffc00000, data 0x10f298e/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765885 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:11.414793+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:12.414933+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:13.415060+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:14.415221+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 105 heartbeat osd_stat(store_statfs(0x4fba79000/0x0/0x4ffc00000, data 0x10f298e/0x11a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:15.415369+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765885 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:16.415565+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:17.415697+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:18.415809+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.993864059s of 12.120968819s, submitted: 33
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:19.421968+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:20.422134+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:21.422304+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:22.422459+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:23.422572+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:24.422748+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:25.422897+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 17416192 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:26.423236+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:27.426599+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:28.428873+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:29.429936+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:30.431015+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:31.431339+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:32.432429+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:33.433059+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:34.433848+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:35.434420+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 17408000 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:36.434899+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:37.435322+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:38.435447+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:39.435676+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:40.435828+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:41.436025+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:42.436165+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:43.436646+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:44.437022+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:45.437274+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:46.437446+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:47.437622+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:48.438026+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:49.438322+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:50.438661+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:51.438864+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:52.439131+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:53.439320+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:54.439631+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:55.439920+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:56.440121+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:57.440260+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:58.440413+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:59.440537+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:00.440639+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:01.440826+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:02.440933+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:03.441083+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:04.441255+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:05.441435+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:06.441560+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:07.441733+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:08.441860+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:09.442010+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:10.442150+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:11.442363+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:12.442506+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:13.442659+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:14.442819+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:15.443005+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:16.443117+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:17.443216+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:18.443325+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:19.443453+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:20.443578+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 17399808 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:21.443717+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:22.443846+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:23.444103+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:24.444360+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:25.444524+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:26.444741+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:27.445005+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:28.445261+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:29.445486+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:30.445753+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:31.446410+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:32.451277+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:33.453751+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:34.457311+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:35.458674+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:36.459016+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:37.459263+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:38.459484+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:39.459699+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:40.459814+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:41.459947+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:42.460261+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:43.460473+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:44.460655+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:45.460879+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:46.461046+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:47.461276+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:48.461556+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:49.461945+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:50.462124+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:51.462270+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:52.462412+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:53.462581+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:54.462676+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:55.462780+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:56.462908+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:57.463016+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 17391616 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:58.463138+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 17195008 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'config diff' '{prefix=config diff}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'config show' '{prefix=config show}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:59.463303+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 16842752 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:00.463409+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 16777216 heap: 90054656 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'log dump' '{prefix=log dump}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:01.463533+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 27811840 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'perf dump' '{prefix=perf dump}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'perf schema' '{prefix=perf schema}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:02.463654+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 27656192 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:03.463817+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 27656192 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:04.463975+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 27656192 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:05.464096+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 27656192 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:06.464269+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:07.464386+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:08.464549+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:09.464674+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:10.464806+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:11.464961+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:12.465065+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:13.465224+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:14.465374+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:15.465517+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:16.465655+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:17.465778+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:18.465918+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:19.466035+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:20.466124+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:21.466301+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:22.466494+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:23.466631+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:24.466791+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:25.466914+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:26.467056+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:27.467192+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:28.467344+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:29.467477+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:30.467612+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:31.467802+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:32.467981+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:33.468164+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:34.468408+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:35.468964+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:36.470481+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:37.471237+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:38.471363+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:39.471547+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:40.472187+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:41.472642+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:42.472884+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:43.473587+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:44.474208+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:45.474688+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:46.475033+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:47.475334+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:48.475584+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:49.475740+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:50.476082+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 27648000 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:51.476495+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:52.476913+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:53.477221+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:54.477534+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:55.477753+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:56.478566+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:57.478776+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:58.479094+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:59.479349+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:00.479534+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:01.479764+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:02.479954+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:03.480126+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:04.480344+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:05.480576+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:06.480788+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:07.480973+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:08.481464+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:09.481742+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:10.481933+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:11.482129+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:12.483841+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:13.485771+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:14.486747+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:15.487702+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:16.488873+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:17.489928+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:18.490869+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:19.491796+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:20.492383+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:21.493161+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:22.493724+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:23.494267+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:24.494578+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:25.494835+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 27639808 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:26.495147+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:27.495382+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:28.495646+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:29.495909+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:30.496153+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:31.496350+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:32.496461+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:33.496626+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:34.496831+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:35.497056+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:36.497313+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:37.497437+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:38.497631+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:39.497818+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:40.499404+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:41.502796+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:42.505789+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:43.508469+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:44.510770+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:45.512083+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:46.513615+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:47.515229+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:48.516618+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:49.517912+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 27623424 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:50.519003+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:51.520075+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:52.521102+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:53.522005+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:54.522738+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:55.523234+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 27615232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:56.523957+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:57.524260+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:58.524429+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:59.524942+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:00.525130+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:01.525650+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:02.526082+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:03.526499+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:04.526790+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:05.526952+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:06.527137+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:07.527314+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:08.527681+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:09.527982+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:10.528305+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:11.528616+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:12.528790+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:13.528936+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:14.529077+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:15.529263+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:16.529433+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:17.529568+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:18.529724+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:19.529846+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:20.529979+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:21.530145+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:22.530407+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:23.530549+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:24.530686+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:25.530850+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:26.531020+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:27.531152+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:28.531317+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:29.531442+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:30.531566+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:31.531937+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:32.532027+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:33.532217+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:34.532368+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:35.532543+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:36.532669+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:37.532794+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:38.532918+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:39.533074+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:40.533232+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 27607040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:41.533379+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 27598848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:42.533559+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:43.533677+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:44.533841+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:45.533996+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:46.534226+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:47.534368+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:48.534865+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:49.535224+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:50.535544+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:51.535810+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:52.535941+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:53.536253+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:54.536542+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:55.536805+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:56.536997+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:57.537200+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:58.537355+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:59.537511+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:00.537665+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:01.537869+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:02.538012+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:03.538235+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:04.538438+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:05.538625+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:06.538909+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:07.539246+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:08.539555+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:09.539822+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:10.540033+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:11.540243+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:12.540414+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:13.540678+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:14.540878+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:15.541096+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:16.541284+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 27590656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:17.541412+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:18.541602+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:19.541752+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:20.541917+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:21.542134+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:22.542259+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:23.542628+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:24.542765+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:25.542951+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:26.543092+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:27.543251+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:28.543397+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:29.543492+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:30.543621+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:31.543809+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:32.544110+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:33.544229+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:34.544396+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:35.544599+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:36.544736+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:37.544861+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 27582464 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:38.544976+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:39.545083+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:40.545220+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:41.545404+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:42.545525+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:43.545658+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:44.545794+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:45.545985+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:46.546109+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:47.546282+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:48.546409+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:49.546568+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:50.546702+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:51.546895+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:52.547043+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:53.547135+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 27754496 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:54.547264+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:55.547394+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:56.547529+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:57.547690+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:58.547819+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:59.547975+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:00.548110+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:01.548263+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:02.548432+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:03.548613+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:04.548772+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:05.548936+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:06.549081+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:07.549225+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:08.549379+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:09.549563+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:10.549705+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:11.549887+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:12.550009+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:13.550135+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:14.550284+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:15.550446+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:16.550594+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:17.550750+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:18.550932+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:19.551099+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:20.551260+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:21.551913+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:22.552882+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:23.553680+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:24.554520+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:25.554891+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 27746304 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 6033 writes, 24K keys, 6033 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6033 writes, 1137 syncs, 5.31 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1544 writes, 4445 keys, 1544 commit groups, 1.0 writes per commit group, ingest: 2.24 MB, 0.00 MB/s
                                           Interval WAL: 1544 writes, 647 syncs, 2.39 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:26.555282+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 27738112 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba76000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:27.555873+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 27738112 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 768683 data_alloc: 218103808 data_used: 110592
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:28.556442+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 27738112 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:29.556979+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 27738112 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 ms_handle_reset con 0x5613ea603000 session 0x5613ea3f05a0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ec2d8400
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:30.557286+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 27729920 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 ms_handle_reset con 0x5613eb896c00 session 0x5613ecd46f00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613ee36dc00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 ms_handle_reset con 0x5613eb896400 session 0x5613eb995c20
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: handle_auth_request added challenge on 0x5613eb896c00
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:31.557762+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 27721728 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 372.651794434s of 372.669342041s, submitted: 13
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:32.558064+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 26714112 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:33.558464+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 26714112 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:34.558678+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 26714112 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:35.558859+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:36.559049+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:37.559391+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:38.559529+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:39.559658+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:40.559823+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:41.560108+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:42.560316+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:43.560521+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:44.560780+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:45.560938+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:46.561288+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:47.561549+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:48.561886+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:49.562079+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:50.562236+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:51.562549+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:52.562788+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:53.563502+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:54.564024+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:55.564159+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:56.564234+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:57.564348+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:58.564493+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:59.564805+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:00.565019+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:01.565196+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:02.565704+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:03.565919+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:04.566138+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:05.566355+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:06.566582+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:07.566724+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:08.567091+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:09.567384+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:10.567548+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:11.567820+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:12.568038+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:13.568273+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:14.568526+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:15.568685+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:16.569020+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:17.569245+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:18.569427+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:19.569626+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:20.570261+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:21.570579+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:22.570765+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:23.571008+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:24.571255+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:25.571423+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 26591232 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:26.571825+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:27.572079+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:28.572607+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:29.572950+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:30.573129+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:31.573489+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:32.573731+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:33.573869+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:34.574041+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:35.574286+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:36.574442+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:37.574615+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:38.574784+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:39.575038+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:40.575264+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:41.575451+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:42.575630+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:43.575892+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:44.576196+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:45.576365+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:46.576610+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:47.576837+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:48.577073+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:49.577254+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:50.577440+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:51.577769+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:52.578058+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:53.578230+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:54.578416+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:55.578653+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:56.578801+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:57.578967+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:58.579129+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:59.579296+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:00.579419+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:01.579610+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:02.579735+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:03.579962+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:04.580149+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:05.580341+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:06.580467+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:07.580653+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:08.580833+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:09.580965+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:10.581089+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:11.583392+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:12.583535+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:13.583717+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:14.583863+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:15.584067+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:16.584249+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:17.584421+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:18.584592+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:19.584763+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:20.584917+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:21.585142+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:22.585432+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:23.585555+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:24.585693+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:25.585839+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:26.585998+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:27.586151+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:28.586351+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:29.586480+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:30.587716+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:31.588111+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:32.588596+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:33.589082+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:34.589710+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:35.590051+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:36.590251+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:37.590910+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:38.591157+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:39.639076+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:40.639696+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 26583040 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:41.640316+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:42.640461+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:43.640602+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:44.640904+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:45.641477+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:46.642004+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:47.642389+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:48.642645+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:49.643121+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:50.643521+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:51.643781+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:52.643997+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:53.644151+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:54.644465+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:55.644742+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:56.644912+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:57.645081+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:58.645280+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:59.645460+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:00.645708+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:01.645974+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:02.646128+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:03.646305+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:04.646467+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:05.646589+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:06.646786+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:07.646952+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:08.647078+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:09.647249+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:10.647421+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:11.647636+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:12.647833+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:13.647915+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:14.648064+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:15.648328+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:16.648507+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:17.648695+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:18.648835+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:19.649048+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:20.649257+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:21.649538+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 26574848 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:22.649848+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 26566656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:23.650061+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 26566656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:24.650280+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 26566656 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:25.650484+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 26550272 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:26.650646+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 26550272 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:27.650835+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 26550272 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:28.651000+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:29.651262+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:30.651476+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:31.651718+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:32.651940+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:33.652112+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:34.652355+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:35.653472+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:36.655802+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:37.657886+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:38.658896+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:39.659415+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:40.660125+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:41.660448+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 26542080 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:42.660861+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:43.661415+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:44.661847+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:45.662129+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:46.662488+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:47.662936+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:48.663470+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:49.663876+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1418604975' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:50.664320+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:51.664603+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:52.664740+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:53.664973+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:54.665290+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:55.665500+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:56.665877+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:57.666083+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:58.666223+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:59.666386+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:00.666603+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:01.666784+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:02.666913+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:03.667040+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: osd.1 106 heartbeat osd_stat(store_statfs(0x4fba39000/0x0/0x4ffc00000, data 0x10f3e2e/0x11a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:04.667229+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 26533888 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:05.667396+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 26525696 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:06.667565+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'config diff' '{prefix=config diff}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'config show' '{prefix=config show}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 26607616 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:07.667721+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:40 compute-0 ceph-osd[90055]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 26353664 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: bluestore.MempoolThread(0x5613e96f5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771835 data_alloc: 218103808 data_used: 188416
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:08.667896+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 26509312 heap: 101097472 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: tick
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_tickets
Nov 26 00:06:40 compute-0 ceph-osd[90055]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:09.668025+0000)
Nov 26 00:06:40 compute-0 ceph-osd[90055]: do_command 'log dump' '{prefix=log dump}'
Nov 26 00:06:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 26 00:06:40 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976370413' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 26 00:06:40 compute-0 rsyslogd[1001]: imjournal from <np0005535838:ceph-osd>: begin to drop messages due to rate-limiting
Nov 26 00:06:40 compute-0 ceph-mon[75654]: pgmap v1138: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:40 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/4179959095' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 26 00:06:40 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3591011433' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 26 00:06:40 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1418604975' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 26 00:06:40 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1976370413' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 26 00:06:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 26 00:06:40 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2151486916' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 26 00:06:40 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 26 00:06:40 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1645687231' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 26 00:06:40 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1139: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:06:40.773 160725 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 00:06:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:06:40.774 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 00:06:40 compute-0 ovn_metadata_agent[160720]: 2025-11-26 00:06:40.774 160725 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 00:06:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 26 00:06:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438789313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 26 00:06:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 26 00:06:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2839311836' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 26 00:06:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 26 00:06:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819483446' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 00:06:41 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2151486916' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 26 00:06:41 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1645687231' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 26 00:06:41 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2438789313' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 26 00:06:41 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2839311836' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 26 00:06:41 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1819483446' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 26 00:06:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 26 00:06:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2716800245' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 26 00:06:41 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 26 00:06:41 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1676271483' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 26 00:06:41 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15133 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:42 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15135 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:42 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15137 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:42 compute-0 ceph-mon[75654]: pgmap v1139: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:42 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2716800245' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 26 00:06:42 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1676271483' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 26 00:06:42 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15139 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:42 compute-0 podman[279839]: 2025-11-26 00:06:42.634581979 +0000 UTC m=+0.065331561 container health_status 9c33443d6f21f9126b440289b7d384a59ef92406382d0e40a2b11f1da5dc47d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 00:06:42 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15141 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:42 compute-0 podman[279838]: 2025-11-26 00:06:42.673112235 +0000 UTC m=+0.102708116 container health_status 668c01d92074d087c6242cdfce3346e32e999c6cc9c9fbe994e08b189b1b5b58 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 00:06:42 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1140: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:43 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15145 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15149 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mon[75654]: from='client.15133 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mon[75654]: from='client.15135 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mon[75654]: from='client.15137 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mon[75654]: from='client.15139 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mon[75654]: from='client.15141 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mon[75654]: pgmap v1140: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:43 compute-0 ceph-mon[75654]: from='client.15145 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mon[75654]: from='client.15149 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 26 00:06:43 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816757859' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15151 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:43 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 26 00:06:43 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451013709' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 26 00:06:44 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15155 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:24.530437+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:54.114355+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.11 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:54.128437+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.11 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 81920 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 45) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:54.114355+0000 osd.0 (osd.0) 44 : cluster [DBG] 2.11 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:54.128437+0000 osd.0 (osd.0) 45 : cluster [DBG] 2.11 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:25.530620+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:55.152844+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.12 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:55.166945+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.12 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61808640 unmapped: 81920 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 47) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:55.152844+0000 osd.0 (osd.0) 46 : cluster [DBG] 3.12 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:55.166945+0000 osd.0 (osd.0) 47 : cluster [DBG] 3.12 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:26.530789+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:56.134334+0000 osd.0 (osd.0) 48 : cluster [DBG] 7.f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:34:56.148404+0000 osd.0 (osd.0) 49 : cluster [DBG] 7.f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61816832 unmapped: 73728 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 415693 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 49) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:56.134334+0000 osd.0 (osd.0) 48 : cluster [DBG] 7.f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:34:56.148404+0000 osd.0 (osd.0) 49 : cluster [DBG] 7.f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:27.531003+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 32768 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:28.531122+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61857792 unmapped: 32768 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:29.531267+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 24576 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:30.531395+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 24576 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:31.531510+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61865984 unmapped: 24576 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 415693 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:32.531619+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 16384 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:33.531721+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61874176 unmapped: 16384 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.039699554s of 10.055684090s, submitted: 6
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:34.531852+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:04.170064+0000 osd.0 (osd.0) 50 : cluster [DBG] 5.15 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:04.184188+0000 osd.0 (osd.0) 51 : cluster [DBG] 5.15 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 8192 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 51) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:04.170064+0000 osd.0 (osd.0) 50 : cluster [DBG] 5.15 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:04.184188+0000 osd.0 (osd.0) 51 : cluster [DBG] 5.15 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:35.532098+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:05.175605+0000 osd.0 (osd.0) 52 : cluster [DBG] 3.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:05.189836+0000 osd.0 (osd.0) 53 : cluster [DBG] 3.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 8192 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 53) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:05.175605+0000 osd.0 (osd.0) 52 : cluster [DBG] 3.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:05.189836+0000 osd.0 (osd.0) 53 : cluster [DBG] 3.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:36.532423+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61882368 unmapped: 8192 heap: 61890560 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 417988 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:37.532617+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:07.221119+0000 osd.0 (osd.0) 54 : cluster [DBG] 3.6 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:07.235238+0000 osd.0 (osd.0) 55 : cluster [DBG] 3.6 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 1032192 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 55) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:07.221119+0000 osd.0 (osd.0) 54 : cluster [DBG] 3.6 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:07.235238+0000 osd.0 (osd.0) 55 : cluster [DBG] 3.6 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:38.532802+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 1032192 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:39.532973+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61906944 unmapped: 1032192 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:40.533199+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 1024000 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:41.533358+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61915136 unmapped: 1024000 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 419135 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:42.533531+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:12.069763+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:12.083875+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 999424 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 57) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:12.069763+0000 osd.0 (osd.0) 56 : cluster [DBG] 2.f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:12.083875+0000 osd.0 (osd.0) 57 : cluster [DBG] 2.f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:43.533715+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:13.029311+0000 osd.0 (osd.0) 58 : cluster [DBG] 2.18 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:13.043334+0000 osd.0 (osd.0) 59 : cluster [DBG] 2.18 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61939712 unmapped: 999424 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 59) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:13.029311+0000 osd.0 (osd.0) 58 : cluster [DBG] 2.18 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:13.043334+0000 osd.0 (osd.0) 59 : cluster [DBG] 2.18 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:44.533908+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:14.009829+0000 osd.0 (osd.0) 60 : cluster [DBG] 7.9 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:14.023897+0000 osd.0 (osd.0) 61 : cluster [DBG] 7.9 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 991232 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 61) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:14.009829+0000 osd.0 (osd.0) 60 : cluster [DBG] 7.9 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:14.023897+0000 osd.0 (osd.0) 61 : cluster [DBG] 7.9 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:45.534247+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 991232 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:46.534355+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61947904 unmapped: 991232 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 422577 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:47.534549+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 983040 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:48.534728+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61956096 unmapped: 983040 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:49.534893+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 974848 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:50.535069+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.703536987s of 16.757547379s, submitted: 12
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61964288 unmapped: 974848 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:51.535325+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:20.927526+0000 osd.0 (osd.0) 62 : cluster [DBG] 7.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:20.941728+0000 osd.0 (osd.0) 63 : cluster [DBG] 7.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 958464 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 424871 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 63) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:20.927526+0000 osd.0 (osd.0) 62 : cluster [DBG] 7.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:20.941728+0000 osd.0 (osd.0) 63 : cluster [DBG] 7.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:52.535591+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:21.952688+0000 osd.0 (osd.0) 64 : cluster [DBG] 7.4 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:21.966794+0000 osd.0 (osd.0) 65 : cluster [DBG] 7.4 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 958464 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 65) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:21.952688+0000 osd.0 (osd.0) 64 : cluster [DBG] 7.4 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:21.966794+0000 osd.0 (osd.0) 65 : cluster [DBG] 7.4 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:53.535952+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61980672 unmapped: 958464 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:54.536108+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 950272 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:55.536301+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61988864 unmapped: 950272 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:56.536542+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 942080 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 426019 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:57.536724+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:26.965564+0000 osd.0 (osd.0) 66 : cluster [DBG] 3.1b scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:26.979720+0000 osd.0 (osd.0) 67 : cluster [DBG] 3.1b scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 942080 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 67) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:26.965564+0000 osd.0 (osd.0) 66 : cluster [DBG] 3.1b scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:26.979720+0000 osd.0 (osd.0) 67 : cluster [DBG] 3.1b scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:58.536952+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 61997056 unmapped: 942080 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:34:59.537377+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62005248 unmapped: 933888 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:00.537700+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:29.956036+0000 osd.0 (osd.0) 68 : cluster [DBG] 7.6 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:29.970085+0000 osd.0 (osd.0) 69 : cluster [DBG] 7.6 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 69) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:29.956036+0000 osd.0 (osd.0) 68 : cluster [DBG] 7.6 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:29.970085+0000 osd.0 (osd.0) 69 : cluster [DBG] 7.6 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62005248 unmapped: 933888 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:01.538086+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 925696 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 427166 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:02.538506+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 925696 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:03.538660+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62013440 unmapped: 925696 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:04.538985+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62021632 unmapped: 917504 heap: 62939136 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:05.539305+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.938228607s of 14.966608047s, submitted: 8
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 909312 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:06.539515+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:35.894235+0000 osd.0 (osd.0) 70 : cluster [DBG] 3.9 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:35.908357+0000 osd.0 (osd.0) 71 : cluster [DBG] 3.9 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 71) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:35.894235+0000 osd.0 (osd.0) 70 : cluster [DBG] 3.9 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:35.908357+0000 osd.0 (osd.0) 71 : cluster [DBG] 3.9 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 1949696 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429461 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:07.539768+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:36.846778+0000 osd.0 (osd.0) 72 : cluster [DBG] 7.1f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:36.860678+0000 osd.0 (osd.0) 73 : cluster [DBG] 7.1f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 73) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:36.846778+0000 osd.0 (osd.0) 72 : cluster [DBG] 7.1f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:36.860678+0000 osd.0 (osd.0) 73 : cluster [DBG] 7.1f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62038016 unmapped: 1949696 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:08.540017+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62046208 unmapped: 1941504 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:09.540257+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 1933312 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:10.540617+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62054400 unmapped: 1933312 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:11.540776+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 1925120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 429461 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:12.540968+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 1925120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:13.541115+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62062592 unmapped: 1925120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:14.541351+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 1908736 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:15.541736+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:44.752147+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.18 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:44.765712+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.18 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62078976 unmapped: 1908736 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 75) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:44.752147+0000 osd.0 (osd.0) 74 : cluster [DBG] 7.18 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:44.765712+0000 osd.0 (osd.0) 75 : cluster [DBG] 7.18 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:16.541954+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 1900544 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430609 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:17.542095+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62087168 unmapped: 1900544 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:18.542271+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 1892352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:19.542512+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 1892352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:20.542645+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62095360 unmapped: 1892352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:21.542801+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62103552 unmapped: 1884160 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 430609 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:22.543261+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62111744 unmapped: 1875968 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:23.543597+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 1859584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:24.543760+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 1859584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:25.544438+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62128128 unmapped: 1859584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:26.544594+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.821636200s of 20.845338821s, submitted: 6
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62136320 unmapped: 1851392 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 431757 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:27.544821+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:56.739510+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.1b scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:56.753605+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.1b scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62144512 unmapped: 1843200 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 77) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:56.739510+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.1b scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:56.753605+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.1b scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:28.545068+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:57.771527+0000 osd.0 (osd.0) 78 : cluster [DBG] 3.1f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:35:57.785608+0000 osd.0 (osd.0) 79 : cluster [DBG] 3.1f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 1826816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 79) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:57.771527+0000 osd.0 (osd.0) 78 : cluster [DBG] 3.1f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:35:57.785608+0000 osd.0 (osd.0) 79 : cluster [DBG] 3.1f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:29.545306+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 1826816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:30.545469+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62160896 unmapped: 1826816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:31.545617+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62169088 unmapped: 1818624 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 432905 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:32.545783+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 1810432 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:33.546045+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:02.754567+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.15 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:02.768605+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.15 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62177280 unmapped: 1810432 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 81) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:02.754567+0000 osd.0 (osd.0) 80 : cluster [DBG] 3.15 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:02.768605+0000 osd.0 (osd.0) 81 : cluster [DBG] 3.15 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:34.546509+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:03.745709+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.a scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:03.759940+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.a scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 1794048 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 83) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:03.745709+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.a scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:03.759940+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.a scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:35.546723+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:04.741719+0000 osd.0 (osd.0) 84 : cluster [DBG] 5.7 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:04.756200+0000 osd.0 (osd.0) 85 : cluster [DBG] 5.7 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62193664 unmapped: 1794048 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 85) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:04.741719+0000 osd.0 (osd.0) 84 : cluster [DBG] 5.7 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:04.756200+0000 osd.0 (osd.0) 85 : cluster [DBG] 5.7 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:36.546957+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 1785856 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 436347 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:37.547113+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62201856 unmapped: 1785856 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:38.547273+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.878055573s of 11.916460991s, submitted: 10
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 1777664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:39.547477+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:08.656138+0000 osd.0 (osd.0) 86 : cluster [DBG] 2.2 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:08.670264+0000 osd.0 (osd.0) 87 : cluster [DBG] 2.2 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 1777664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 87) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:08.656138+0000 osd.0 (osd.0) 86 : cluster [DBG] 2.2 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:08.670264+0000 osd.0 (osd.0) 87 : cluster [DBG] 2.2 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:40.547747+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62210048 unmapped: 1777664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:41.548009+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 1761280 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 437494 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:42.548229+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62226432 unmapped: 1761280 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:43.548479+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 1753088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:44.548652+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:13.555094+0000 osd.0 (osd.0) 88 : cluster [DBG] 5.4 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:13.569223+0000 osd.0 (osd.0) 89 : cluster [DBG] 5.4 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 1753088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 89) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:13.555094+0000 osd.0 (osd.0) 88 : cluster [DBG] 5.4 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:13.569223+0000 osd.0 (osd.0) 89 : cluster [DBG] 5.4 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:45.548940+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62234624 unmapped: 1753088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:46.549151+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 1744896 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 438641 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:47.549339+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62242816 unmapped: 1744896 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:48.549497+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:18.494319+0000 osd.0 (osd.0) 90 : cluster [DBG] 2.19 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:18.508549+0000 osd.0 (osd.0) 91 : cluster [DBG] 2.19 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 1712128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 91) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:18.494319+0000 osd.0 (osd.0) 90 : cluster [DBG] 2.19 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:18.508549+0000 osd.0 (osd.0) 91 : cluster [DBG] 2.19 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.816717148s of 10.837341309s, submitted: 6
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:49.549674+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:19.493594+0000 osd.0 (osd.0) 92 : cluster [DBG] 5.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:19.507597+0000 osd.0 (osd.0) 93 : cluster [DBG] 5.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 1712128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 93) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:19.493594+0000 osd.0 (osd.0) 92 : cluster [DBG] 5.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:19.507597+0000 osd.0 (osd.0) 93 : cluster [DBG] 5.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:50.549893+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:20.510540+0000 osd.0 (osd.0) 94 : cluster [DBG] 5.2 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:20.524631+0000 osd.0 (osd.0) 95 : cluster [DBG] 5.2 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62275584 unmapped: 1712128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 95) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:20.510540+0000 osd.0 (osd.0) 94 : cluster [DBG] 5.2 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:20.524631+0000 osd.0 (osd.0) 95 : cluster [DBG] 5.2 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.8 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.8 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:51.550105+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:21.509040+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.8 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:21.523090+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.8 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62291968 unmapped: 1695744 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 443230 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 97) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:21.509040+0000 osd.0 (osd.0) 96 : cluster [DBG] 2.8 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:21.523090+0000 osd.0 (osd.0) 97 : cluster [DBG] 2.8 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:52.550388+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62300160 unmapped: 1687552 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:53.550586+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 1679360 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:54.550771+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62308352 unmapped: 1679360 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:55.551001+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1671168 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:56.551207+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1671168 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 443230 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:57.551375+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62316544 unmapped: 1671168 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:58.551600+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1662976 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:35:59.551832+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62324736 unmapped: 1662976 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:00.552074+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62332928 unmapped: 1654784 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:01.554534+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.046909332s of 12.069302559s, submitted: 6
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62341120 unmapped: 1646592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 444377 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:02.555412+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 4 last_log 101 sent 97 num 4 unsent 4 sending 4
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:31.562756+0000 osd.0 (osd.0) 98 : cluster [DBG] 2.b scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:31.577003+0000 osd.0 (osd.0) 99 : cluster [DBG] 2.b scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:32.517161+0000 osd.0 (osd.0) 100 : cluster [DBG] 2.1d scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:32.531390+0000 osd.0 (osd.0) 101 : cluster [DBG] 2.1d scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62365696 unmapped: 1622016 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 101) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:31.562756+0000 osd.0 (osd.0) 98 : cluster [DBG] 2.b scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:31.577003+0000 osd.0 (osd.0) 99 : cluster [DBG] 2.b scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:32.517161+0000 osd.0 (osd.0) 100 : cluster [DBG] 2.1d scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:32.531390+0000 osd.0 (osd.0) 101 : cluster [DBG] 2.1d scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:03.555817+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:33.535826+0000 osd.0 (osd.0) 102 : cluster [DBG] 5.5 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:33.549663+0000 osd.0 (osd.0) 103 : cluster [DBG] 5.5 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 1605632 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 103) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:33.535826+0000 osd.0 (osd.0) 102 : cluster [DBG] 5.5 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:33.549663+0000 osd.0 (osd.0) 103 : cluster [DBG] 5.5 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:04.557111+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 1605632 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:05.558156+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62382080 unmapped: 1605632 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:06.559003+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 446672 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 1581056 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:07.559820+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:37.513508+0000 osd.0 (osd.0) 104 : cluster [DBG] 2.1c scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:37.527617+0000 osd.0 (osd.0) 105 : cluster [DBG] 2.1c scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62406656 unmapped: 1581056 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 105) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:37.513508+0000 osd.0 (osd.0) 104 : cluster [DBG] 2.1c scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:37.527617+0000 osd.0 (osd.0) 105 : cluster [DBG] 2.1c scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:08.560063+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62414848 unmapped: 1572864 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:09.560250+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62423040 unmapped: 1564672 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:10.561103+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 1556480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:11.561295+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 447820 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 1556480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.907520294s of 10.934170723s, submitted: 8
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:12.562947+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:42.496951+0000 osd.0 (osd.0) 106 : cluster [DBG] 2.1f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:42.511098+0000 osd.0 (osd.0) 107 : cluster [DBG] 2.1f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62431232 unmapped: 1556480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.7 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.7 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:13.564848+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 4 last_log 109 sent 107 num 4 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:43.458943+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.7 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:43.476594+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.7 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 107) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:42.496951+0000 osd.0 (osd.0) 106 : cluster [DBG] 2.1f scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:42.511098+0000 osd.0 (osd.0) 107 : cluster [DBG] 2.1f scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 1548288 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:14.566303+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 109) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:43.458943+0000 osd.0 (osd.0) 108 : cluster [DBG] 6.7 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:43.476594+0000 osd.0 (osd.0) 109 : cluster [DBG] 6.7 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62439424 unmapped: 1548288 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:15.568866+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:45.393115+0000 osd.0 (osd.0) 110 : cluster [DBG] 6.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:45.414149+0000 osd.0 (osd.0) 111 : cluster [DBG] 6.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 111) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:45.393115+0000 osd.0 (osd.0) 110 : cluster [DBG] 6.3 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:45.414149+0000 osd.0 (osd.0) 111 : cluster [DBG] 6.3 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1540096 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:16.570120+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 451262 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1540096 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:17.570875+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62447616 unmapped: 1540096 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:18.572972+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 1531904 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:19.573408+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62455808 unmapped: 1531904 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:20.573991+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62472192 unmapped: 1515520 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:21.574282+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:51.288689+0000 osd.0 (osd.0) 112 : cluster [DBG] 6.5 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:51.309913+0000 osd.0 (osd.0) 113 : cluster [DBG] 6.5 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 113) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:51.288689+0000 osd.0 (osd.0) 112 : cluster [DBG] 6.5 scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:51.309913+0000 osd.0 (osd.0) 113 : cluster [DBG] 6.5 scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 452409 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1507328 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:22.575125+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1507328 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:23.575364+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.736191750s of 11.767599106s, submitted: 8
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62480384 unmapped: 1507328 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:24.575532+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:54.264779+0000 osd.0 (osd.0) 114 : cluster [DBG] 6.9 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:54.278847+0000 osd.0 (osd.0) 115 : cluster [DBG] 6.9 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 115) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:54.264779+0000 osd.0 (osd.0) 114 : cluster [DBG] 6.9 deep-scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:54.278847+0000 osd.0 (osd.0) 115 : cluster [DBG] 6.9 deep-scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 1499136 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:25.575842+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:55.276921+0000 osd.0 (osd.0) 116 : cluster [DBG] 6.a scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  will send 2025-11-25T23:36:55.291016+0000 osd.0 (osd.0) 117 : cluster [DBG] 6.a scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client handle_log_ack log(last 117) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:55.276921+0000 osd.0 (osd.0) 116 : cluster [DBG] 6.a scrub starts
Nov 26 00:06:44 compute-0 ceph-osd[89044]: log_client  logged 2025-11-25T23:36:55.291016+0000 osd.0 (osd.0) 117 : cluster [DBG] 6.a scrub ok
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62488576 unmapped: 1499136 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:26.576275+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62496768 unmapped: 1490944 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:27.576389+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62496768 unmapped: 1490944 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:28.576768+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1482752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:29.576954+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1482752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:30.577240+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62504960 unmapped: 1482752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:31.578322+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 1474560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:32.578588+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 1474560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:33.578790+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62513152 unmapped: 1474560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:34.579035+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62521344 unmapped: 1466368 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:35.579584+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62521344 unmapped: 1466368 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:36.579827+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62529536 unmapped: 1458176 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:37.580034+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 1449984 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:38.580325+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62537728 unmapped: 1449984 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:39.580612+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1441792 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:40.580852+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1441792 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:41.581067+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62545920 unmapped: 1441792 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:42.581302+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1433600 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:43.581558+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62554112 unmapped: 1433600 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:44.582694+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1425408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:45.582885+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1425408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:46.583062+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62562304 unmapped: 1425408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:47.583309+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 1417216 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:48.583493+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62570496 unmapped: 1417216 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:49.583674+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62578688 unmapped: 1409024 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:50.583903+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62586880 unmapped: 1400832 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:51.584227+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 1392640 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:52.584471+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62595072 unmapped: 1392640 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:53.584728+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1384448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:54.585040+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1384448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:55.585369+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62603264 unmapped: 1384448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:56.585723+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 1376256 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:57.586015+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 1376256 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:58.586342+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62611456 unmapped: 1376256 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:36:59.586570+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 1359872 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:00.586776+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62627840 unmapped: 1359872 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:01.587028+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1351680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:02.587253+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1351680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:03.587505+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62636032 unmapped: 1351680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:04.587669+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62644224 unmapped: 1343488 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:05.588680+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62652416 unmapped: 1335296 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:06.589396+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 1327104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:07.590025+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:08.590248+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 1327104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:09.590416+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62660608 unmapped: 1327104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:10.591227+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1318912 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:11.591379+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62668800 unmapped: 1318912 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:12.591782+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1310720 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:13.591950+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1310720 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:14.592114+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62676992 unmapped: 1310720 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:15.592281+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 1302528 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:16.592449+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62685184 unmapped: 1302528 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:17.592645+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 1294336 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:18.592833+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62693376 unmapped: 1294336 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:19.593016+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62701568 unmapped: 1286144 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:20.593162+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1277952 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:21.593336+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1277952 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:22.593519+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1277952 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:23.640930+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62709760 unmapped: 1277952 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:24.641070+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 1269760 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:25.641250+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62717952 unmapped: 1269760 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:26.641483+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62726144 unmapped: 1261568 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:27.641637+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62734336 unmapped: 1253376 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:28.641984+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 1245184 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:29.642488+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62742528 unmapped: 1245184 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:30.642747+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1236992 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:31.643007+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62750720 unmapped: 1236992 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:32.643276+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 1228800 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:33.643493+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 1228800 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:34.643717+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62758912 unmapped: 1228800 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:35.644010+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 1220608 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:36.644246+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 1220608 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:37.644699+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62767104 unmapped: 1220608 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:38.644861+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 1212416 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:39.644998+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62775296 unmapped: 1212416 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:40.645158+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62783488 unmapped: 1204224 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:41.645351+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1196032 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:42.645522+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62791680 unmapped: 1196032 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:43.645742+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1187840 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:44.645944+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62799872 unmapped: 1187840 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:45.646148+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1179648 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:46.646291+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62808064 unmapped: 1179648 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:47.646412+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1171456 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:48.646539+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1171456 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:49.646658+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62816256 unmapped: 1171456 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:50.646772+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1163264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:51.662539+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1163264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:52.662697+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62824448 unmapped: 1163264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:53.662864+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 1155072 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:54.663006+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62832640 unmapped: 1155072 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:55.663211+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 1146880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:56.663380+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 1146880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:57.663664+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62840832 unmapped: 1146880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:58.663926+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1138688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:37:59.664153+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1138688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:00.664388+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62849024 unmapped: 1138688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:01.664525+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 1130496 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:02.664765+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62857216 unmapped: 1130496 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:03.664944+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 1122304 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:04.665113+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 1122304 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:05.665374+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62865408 unmapped: 1122304 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:06.665541+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 1114112 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:07.665761+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62873600 unmapped: 1114112 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:08.665935+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 1105920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:09.666094+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 1105920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:10.666243+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62881792 unmapped: 1105920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:11.666438+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 1097728 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:12.666595+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62889984 unmapped: 1097728 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:13.666772+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 1089536 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:14.666998+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62898176 unmapped: 1089536 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:15.667234+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 1081344 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:16.667431+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 1081344 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:17.667593+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62906368 unmapped: 1081344 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:18.667773+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1073152 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:19.667961+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62914560 unmapped: 1073152 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:20.668099+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1064960 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:21.668306+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1064960 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:22.668451+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62922752 unmapped: 1064960 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:23.668591+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1056768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:24.668755+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1056768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:25.668961+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62930944 unmapped: 1056768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:26.669232+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1048576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:27.669378+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1048576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:28.669532+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62939136 unmapped: 1048576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:29.669664+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1040384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:30.669812+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1040384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:31.670112+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62947328 unmapped: 1040384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:32.670278+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 1032192 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:33.670451+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62955520 unmapped: 1032192 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:34.670637+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 1024000 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:35.670852+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62963712 unmapped: 1024000 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:36.671003+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 1015808 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:37.671190+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 1015808 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:38.671349+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62971904 unmapped: 1015808 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:39.671538+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 1007616 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:40.671734+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 1007616 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:41.672082+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62980096 unmapped: 1007616 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:42.672249+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 999424 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:43.672421+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62988288 unmapped: 999424 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:44.672535+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 991232 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:45.672743+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 62996480 unmapped: 991232 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:46.672832+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 983040 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:47.672979+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 983040 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:48.673148+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63004672 unmapped: 983040 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:49.673324+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 974848 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:50.673455+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 974848 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:51.673606+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63012864 unmapped: 974848 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:52.673764+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 966656 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:53.673888+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 966656 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:54.674026+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63021056 unmapped: 966656 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:55.674238+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 958464 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:56.674389+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 958464 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:57.674548+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63029248 unmapped: 958464 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:58.674703+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 950272 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:38:59.674832+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63037440 unmapped: 950272 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:00.675013+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 942080 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:01.675166+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63045632 unmapped: 942080 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:02.675335+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 933888 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:03.675497+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 933888 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:04.675631+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63053824 unmapped: 933888 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:05.675809+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 925696 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:06.675995+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63062016 unmapped: 925696 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:07.676134+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 917504 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:08.676265+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 917504 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:09.676421+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63070208 unmapped: 917504 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:10.676589+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 909312 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:11.676787+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63078400 unmapped: 909312 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:12.676923+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 901120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:13.677110+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 901120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:14.677259+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 901120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:15.677467+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63086592 unmapped: 901120 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:16.677597+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 892928 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:17.677778+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 892928 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:18.678020+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63094784 unmapped: 892928 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:19.678223+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 884736 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:20.678364+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63102976 unmapped: 884736 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:21.678569+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 876544 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:22.678765+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 876544 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:23.678964+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63111168 unmapped: 876544 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:24.679121+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 868352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:25.679404+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 868352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:26.679598+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63119360 unmapped: 868352 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:27.679762+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 860160 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:28.679900+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 860160 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:29.681018+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63127552 unmapped: 860160 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:30.681222+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 851968 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:31.681393+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63135744 unmapped: 851968 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:32.681550+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 843776 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:33.681702+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63143936 unmapped: 843776 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:34.681929+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 835584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:35.682138+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 835584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:36.682320+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63152128 unmapped: 835584 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:37.682513+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 827392 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:38.682674+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 827392 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:39.682859+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63160320 unmapped: 827392 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:40.683010+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 819200 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:41.683141+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63168512 unmapped: 819200 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:42.683292+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 811008 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:43.683475+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 811008 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:44.683588+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63176704 unmapped: 811008 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:45.684017+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 802816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:46.684113+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 802816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:47.684284+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63184896 unmapped: 802816 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:48.684497+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 794624 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:49.684622+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 794624 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:50.684798+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63193088 unmapped: 794624 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:51.684999+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 786432 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:52.685217+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 786432 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:53.685384+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63201280 unmapped: 786432 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:54.685554+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 778240 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:55.685731+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63209472 unmapped: 778240 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:56.685883+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 770048 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:57.686027+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 770048 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:58.686227+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63217664 unmapped: 770048 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:39:59.686392+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 761856 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:00.686564+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63225856 unmapped: 761856 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:01.686718+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 753664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:02.686910+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 753664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:03.687056+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63234048 unmapped: 753664 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:04.687284+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 745472 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:05.687468+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63242240 unmapped: 745472 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:06.687598+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 737280 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:07.687758+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63250432 unmapped: 737280 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:08.687892+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 729088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:09.688072+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 729088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:10.688243+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 729088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:11.688389+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63258624 unmapped: 729088 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:12.688530+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 720896 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:13.688737+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63266816 unmapped: 720896 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:14.688954+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 712704 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:15.689212+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63275008 unmapped: 712704 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:16.689375+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63283200 unmapped: 704512 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:17.689509+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 696320 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:18.690097+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 696320 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:19.690298+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63291392 unmapped: 696320 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:20.691907+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 688128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:21.692095+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 688128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:22.692253+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63299584 unmapped: 688128 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:23.692829+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 679936 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:24.693103+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63307776 unmapped: 679936 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:25.693439+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 671744 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:26.693650+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63315968 unmapped: 671744 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:27.693822+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 663552 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:28.694016+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 663552 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:29.694337+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63324160 unmapped: 663552 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:30.694712+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 655360 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:31.694960+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 655360 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:32.695163+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63332352 unmapped: 655360 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:33.695562+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 647168 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:34.695841+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63340544 unmapped: 647168 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:35.696033+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 638976 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:36.696215+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 638976 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:37.696359+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63348736 unmapped: 638976 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:38.696709+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 630784 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:39.696890+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63356928 unmapped: 630784 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:40.697061+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 622592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:41.697269+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 622592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:42.697450+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 622592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:43.697624+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 622592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:44.697772+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63365120 unmapped: 622592 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:45.697928+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 614400 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:46.698033+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63373312 unmapped: 614400 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:47.698149+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 606208 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:48.698324+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 606208 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:49.698590+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63381504 unmapped: 606208 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:50.698926+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 598016 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:51.699211+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63389696 unmapped: 598016 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:52.700082+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 589824 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:53.700666+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 589824 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:54.701021+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63397888 unmapped: 589824 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:55.701394+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 581632 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:56.701521+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63406080 unmapped: 581632 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:57.701802+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 573440 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:58.702085+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 573440 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:40:59.702292+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63414272 unmapped: 573440 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:00.702463+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 565248 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:01.702632+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63422464 unmapped: 565248 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:02.702842+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 557056 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:03.703078+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 557056 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:04.703256+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63430656 unmapped: 557056 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:05.703506+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 548864 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:06.703692+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 548864 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:07.703823+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63438848 unmapped: 548864 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:08.704017+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 540672 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:09.704262+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63447040 unmapped: 540672 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:10.704425+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 532480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:11.704592+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 532480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:12.704803+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63455232 unmapped: 532480 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:13.704989+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 524288 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:14.705203+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63463424 unmapped: 524288 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:15.705367+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63471616 unmapped: 516096 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:16.705552+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 491520 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:17.705750+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63496192 unmapped: 491520 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:18.706012+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 483328 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:19.706160+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63504384 unmapped: 483328 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:20.706345+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 475136 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:21.706542+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 475136 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:22.706679+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63512576 unmapped: 475136 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:23.706863+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 466944 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:24.707071+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63520768 unmapped: 466944 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:25.707289+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 458752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:26.707476+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 458752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:27.707664+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:28.708223+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63528960 unmapped: 458752 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:29.708386+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 450560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:30.708530+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 450560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:31.708670+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63537152 unmapped: 450560 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:32.708817+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 442368 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:33.708988+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 442368 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:34.709246+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63545344 unmapped: 442368 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:35.709416+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 434176 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:36.709540+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63553536 unmapped: 434176 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:37.709724+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 409600 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:38.709890+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 409600 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:39.710159+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63578112 unmapped: 409600 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:40.710417+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 401408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:41.710623+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 401408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:42.710753+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63586304 unmapped: 401408 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:43.711078+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 393216 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:44.711420+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63594496 unmapped: 393216 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:45.711653+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 385024 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:46.711810+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 385024 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:47.712145+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 385024 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:48.712395+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63602688 unmapped: 385024 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:49.712590+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 376832 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:50.712813+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63610880 unmapped: 376832 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:51.712989+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 368640 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:52.713229+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63619072 unmapped: 368640 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:53.713365+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 360448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:54.713504+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 360448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:55.713633+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63627264 unmapped: 360448 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:56.713830+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63635456 unmapped: 352256 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:57.714025+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 335872 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:58.714217+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63651840 unmapped: 335872 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:41:59.714337+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 327680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:00.714579+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 327680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:01.714693+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63660032 unmapped: 327680 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:02.714956+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 319488 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:03.715221+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63668224 unmapped: 319488 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:04.715381+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 311296 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:05.715914+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 311296 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:06.716083+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63676416 unmapped: 311296 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:07.716308+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 303104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:08.717253+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 303104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:09.717502+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63684608 unmapped: 303104 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:10.717723+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 294912 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:11.717899+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 294912 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:12.718093+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63692800 unmapped: 294912 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:13.718340+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 286720 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:14.718612+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63700992 unmapped: 286720 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:15.718985+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 278528 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:16.719128+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 278528 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:17.719282+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63709184 unmapped: 278528 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:18.719643+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 270336 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:19.719868+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63717376 unmapped: 270336 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:20.720148+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63725568 unmapped: 262144 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 16.39 MB, 0.03 MB/s
                                           Interval WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:21.720374+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 196608 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:22.720540+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63791104 unmapped: 196608 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:23.720702+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 188416 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:24.720847+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63799296 unmapped: 188416 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:25.721413+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 180224 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:26.721558+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 180224 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:27.721680+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63807488 unmapped: 180224 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:28.721834+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 172032 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:29.721954+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 172032 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:30.722080+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63815680 unmapped: 172032 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:31.722199+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 163840 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:32.722425+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 163840 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:33.722575+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63823872 unmapped: 163840 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:34.722723+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 155648 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:35.722889+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 155648 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:36.723075+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63832064 unmapped: 155648 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:37.723278+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 147456 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:38.723422+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63840256 unmapped: 147456 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:39.723556+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 139264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:40.723689+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 139264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:41.723891+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63848448 unmapped: 139264 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:42.724058+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 131072 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:43.724274+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63856640 unmapped: 131072 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:44.724431+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 122880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:45.724658+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 122880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:46.724821+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 122880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:47.725020+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 122880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:48.725209+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63864832 unmapped: 122880 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 114688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:50.370379+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 114688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:51.370521+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63873024 unmapped: 114688 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:52.370689+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 106496 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:53.370866+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 106496 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:54.371025+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63881216 unmapped: 106496 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:55.371214+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 98304 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:56.371385+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63889408 unmapped: 98304 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:57.371554+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 90112 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:58.371703+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 90112 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:42:59.371869+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63897600 unmapped: 90112 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:00.372052+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 81920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:01.372239+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 81920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:02.372418+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63905792 unmapped: 81920 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:03.372619+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 73728 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:04.372786+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 73728 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:05.372947+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63913984 unmapped: 73728 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:06.373132+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 65536 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:07.373302+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 65536 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:08.373429+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63922176 unmapped: 65536 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:09.373579+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 57344 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:10.373813+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63930368 unmapped: 57344 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:11.374014+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 49152 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:12.374235+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 49152 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:13.374457+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63938560 unmapped: 49152 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:14.374737+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 40960 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:15.375029+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63946752 unmapped: 40960 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:16.375261+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 32768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:17.375442+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 32768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:18.375626+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63954944 unmapped: 32768 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:19.375844+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 24576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:20.376134+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 24576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:21.376333+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 24576 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:22.376490+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 16384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:23.376789+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 16384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:24.377048+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63971328 unmapped: 16384 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:25.377248+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 8192 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:26.377481+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63979520 unmapped: 8192 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:27.377613+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 0 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:28.377854+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 0 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:29.378067+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63987712 unmapped: 0 heap: 63987712 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:30.378242+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1040384 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:31.378400+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 63995904 unmapped: 1040384 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:32.378532+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 1032192 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:33.378779+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 1032192 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:34.379054+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64004096 unmapped: 1032192 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:35.379314+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 1024000 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:36.379667+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64012288 unmapped: 1024000 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:37.379929+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 1015808 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:38.380116+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 1015808 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:39.380274+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64020480 unmapped: 1015808 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:40.380579+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 1007616 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:41.380870+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 1007616 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:42.381898+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64028672 unmapped: 1007616 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:43.382165+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 999424 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:44.382527+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64036864 unmapped: 999424 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:45.382811+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 991232 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:46.383067+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 991232 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:47.383357+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64045056 unmapped: 991232 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:48.383608+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 983040 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:49.383794+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 983040 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:50.383983+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 983040 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:51.384141+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64053248 unmapped: 983040 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:52.384847+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 974848 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:53.385004+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64061440 unmapped: 974848 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:54.385147+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 966656 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:55.385257+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:56.394089+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64069632 unmapped: 966656 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:57.394252+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 958464 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:58.394413+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64077824 unmapped: 958464 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:43:59.394586+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 950272 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:00.394752+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 950272 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:01.394908+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64086016 unmapped: 950272 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:02.395041+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 942080 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:03.395179+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 942080 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:04.395323+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64094208 unmapped: 942080 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:05.395515+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 933888 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:06.395659+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 933888 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:07.395802+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64102400 unmapped: 933888 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:08.395909+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 925696 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:09.396082+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64110592 unmapped: 925696 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:10.396208+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 917504 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:11.396340+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 917504 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:12.396467+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64118784 unmapped: 917504 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:13.396591+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 909312 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:14.396749+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64126976 unmapped: 909312 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:15.396923+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 901120 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:16.397167+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 901120 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:17.397364+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64135168 unmapped: 901120 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:18.397490+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 892928 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:19.397659+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 892928 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:20.397779+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64143360 unmapped: 892928 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:21.397906+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64151552 unmapped: 884736 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:22.398084+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:23.398263+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:24.398395+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:25.398569+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:26.398821+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:27.398939+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:28.399102+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:29.399300+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:30.399501+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:31.399743+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:32.399929+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:33.400088+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:34.400299+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:35.400509+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:36.400746+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:37.400886+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:38.401096+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:39.401276+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:40.401418+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:41.401560+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:42.401669+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:43.401817+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:44.401995+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:45.402145+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:46.402397+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:47.402507+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:48.402680+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:49.402845+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:50.403004+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:51.403139+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:52.403224+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:53.403343+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:54.403460+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:55.403660+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:56.403847+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:57.404021+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:58.404244+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:44:59.404384+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:00.404507+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:01.404627+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:02.404774+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:03.404900+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:04.405025+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:05.405218+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:06.405398+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:07.405528+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:08.405689+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:09.405853+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:10.406008+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:11.406152+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:12.406294+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:13.406493+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:14.406677+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:15.406820+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:16.406963+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:17.407157+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:18.407379+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:19.407562+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:20.407734+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:21.407864+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:22.408029+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:23.408241+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:24.408404+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:25.408602+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:26.408815+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:27.409033+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:28.409167+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:29.409325+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:30.409542+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:31.409744+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:32.409861+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:33.409978+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:34.410107+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:35.410232+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:36.410433+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:37.410551+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:38.410726+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:39.410889+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:40.411077+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:41.411274+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:42.411430+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:43.411704+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:44.412062+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:45.412264+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:46.412515+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:47.412775+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:48.412958+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:49.413391+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:50.413579+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:51.413813+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:52.414101+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:53.414362+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:54.414591+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:55.414768+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:56.414936+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:57.415068+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:58.415230+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:45:59.415342+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:00.415451+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:01.415597+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:02.415746+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:03.415896+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:04.416043+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:05.416212+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:06.416368+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:07.416517+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:08.416702+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:09.416861+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:10.417087+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:11.417227+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:12.417361+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:13.417509+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:14.417681+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:15.418026+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:16.418238+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:17.418390+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:18.418542+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:19.418729+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:20.418883+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:21.419046+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:22.419190+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:23.419386+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:24.419538+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:25.419684+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:26.419851+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:27.419965+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:28.420100+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:29.420319+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:30.420480+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:31.420721+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:32.421057+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:33.421189+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:34.421359+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:35.421570+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:36.421795+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:37.421971+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:38.422111+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:39.422288+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:40.422461+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:41.422611+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:42.422751+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:43.422898+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:44.423116+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:45.423233+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:46.423763+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:47.424786+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:48.424913+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:49.425571+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:50.425979+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:51.426145+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:52.426315+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:53.427092+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:54.427237+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:55.427469+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:56.427661+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:57.427922+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:58.428167+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:46:59.428311+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:00.428529+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:01.428659+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:02.428792+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:03.428953+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:04.429115+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:05.429306+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:06.429518+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:07.429679+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:08.429889+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:09.430077+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:10.430262+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:11.430412+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:12.430551+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:13.430695+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:14.430839+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:15.430986+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:16.431216+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:17.431374+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:18.431483+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:19.431639+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:20.431782+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:21.431960+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64159744 unmapped: 876544 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:22.432116+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:23.432282+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:24.432460+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:25.432630+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64167936 unmapped: 868352 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:26.432836+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: mgrc ms_handle_reset ms_handle_reset con 0x55a4ecca1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 26 00:06:44 compute-0 ceph-osd[89044]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: get_auth_request con 0x55a4ecca0800 auth_method 0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: mgrc handle_mgr_configure stats_period=5
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:27.432995+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:28.433130+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:29.433289+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:30.433407+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 ms_handle_reset con 0x55a4ecca1800 session 0x55a4ec84d4a0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4800
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:31.433538+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:32.433708+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:33.433873+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:34.434001+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:35.434118+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:36.434272+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:37.434425+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:38.434585+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:39.434737+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:40.434882+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:41.435052+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:42.435237+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:43.435375+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:44.435524+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:45.435702+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:46.435918+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:47.436066+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:48.436197+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:49.436346+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:50.436481+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:51.436649+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:52.436802+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:53.436941+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:54.437160+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:55.437350+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:56.437551+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:57.437691+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:58.437819+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:47:59.437939+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:00.438075+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:01.438375+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:02.438615+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:03.438812+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:04.439049+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:05.439207+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:06.439345+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:07.439504+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:08.439636+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:09.439848+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:10.440053+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:11.440326+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:12.440560+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:13.440731+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:14.440927+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:15.441127+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:16.441346+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:17.441564+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:18.441759+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:19.441983+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:20.442201+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:21.442391+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:22.442557+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:23.442732+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:24.443117+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:25.443279+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:26.443741+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:27.443864+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:28.444027+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:29.444205+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:30.444349+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:31.444578+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:32.444875+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:33.445101+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:34.445311+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:35.445533+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:36.445810+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:37.446091+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:38.446282+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:39.446418+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:40.446554+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:41.446721+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:42.446915+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:43.447146+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:44.447321+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:45.447469+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:46.447663+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:47.447791+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:48.447957+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:49.448298+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:50.448558+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:51.448749+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:52.448997+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:53.449260+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:54.449495+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:55.449737+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:56.449952+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:57.450253+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:58.450532+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:48:59.450801+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:00.450974+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:01.451355+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:02.451545+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:03.452114+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:04.452325+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:05.452609+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:06.452786+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:07.452947+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:08.453270+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:09.453440+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:10.453593+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:11.453731+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:12.453906+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:13.454048+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:14.454242+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:15.454678+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:16.454835+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:17.454983+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:18.455105+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:19.455280+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:20.455474+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:21.455646+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:22.455866+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:23.456076+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:24.456245+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:25.456395+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:26.456549+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64405504 unmapped: 630784 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:27.456737+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:28.456893+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:29.457072+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:30.457219+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:31.457368+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:32.457577+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:33.457799+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:34.457980+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:35.458114+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:36.458274+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:37.458410+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:38.458548+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:39.458714+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:40.458863+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:41.459094+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:42.459287+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:43.459529+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:44.459717+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:45.459893+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:46.460059+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:47.460251+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:48.460407+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:49.460559+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:50.460807+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:51.460989+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:52.461195+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:53.461334+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:54.461511+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:55.461612+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:56.461783+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:57.461969+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:58.462137+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:49:59.462239+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:00.462380+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:01.462518+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:02.463214+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:03.464007+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:04.464163+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:05.464403+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:06.464773+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:07.464888+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:08.465014+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:09.465228+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:10.465337+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:11.465452+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:12.465553+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:13.465651+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:14.465771+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:15.465888+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:16.466045+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:17.466180+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:18.466326+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:19.466462+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:20.466709+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:21.466867+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:22.466980+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:23.467138+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:24.467384+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:25.467545+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:26.467713+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:27.467867+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:28.468019+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:29.468149+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:30.468272+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:31.468419+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:32.468560+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:33.468693+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:34.468804+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:35.468932+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:36.469080+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:37.469200+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:38.469387+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:39.469514+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:40.469666+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:41.469797+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:42.469976+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:43.470231+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:44.470377+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:45.470578+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:46.470785+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:47.470966+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:48.471103+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:49.471271+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1131843593' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:50.471379+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:51.471496+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:52.471596+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:53.471726+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:54.471897+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:55.472058+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:56.472304+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:57.472452+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:58.472639+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:50:59.472783+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:00.472919+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:01.473049+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:02.473190+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:03.473373+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:04.473682+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:05.473952+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:06.474235+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:07.474383+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:08.474995+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:09.475508+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:10.475955+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:11.476293+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:12.476545+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:13.476837+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:14.477106+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:15.477387+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:16.477581+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:17.477740+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:18.477901+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:19.478120+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:20.478344+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:21.478538+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:22.478746+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:23.478907+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:24.479053+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64323584 unmapped: 712704 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:25.479242+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:26.479428+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:27.479652+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:28.479792+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:29.479970+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:30.480155+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:31.480378+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:32.480561+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:33.480660+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:34.480806+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:35.480970+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:36.481140+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:37.481297+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:38.481480+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:39.481648+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:40.481828+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:41.481996+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:42.482156+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:43.482342+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:44.482423+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:45.482511+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:46.482643+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:47.482779+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:48.482908+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:49.483108+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:50.483292+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:51.483466+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:52.483569+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:53.483718+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:54.483890+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:55.484045+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:56.484226+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:57.484357+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:58.484497+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:51:59.484691+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:00.484903+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:01.485014+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:02.485119+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:03.485267+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:04.485416+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:05.485562+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:06.485772+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:07.486075+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:08.486369+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64331776 unmapped: 704512 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:09.487254+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:10.487594+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:11.487985+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:12.488291+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:13.488493+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:14.488801+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:15.489231+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:16.489530+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:17.489869+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:18.490088+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:19.490346+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:20.490537+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64339968 unmapped: 696320 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:21.490934+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 4366 writes, 20K keys, 4366 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 4366 writes, 458 syncs, 9.53 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba69090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a4eba691f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:22.491113+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:23.491296+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:24.491507+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:25.491825+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:26.492083+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:27.492329+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:28.492510+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:29.492723+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:30.492921+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:31.493147+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:32.493359+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:33.493575+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:34.493773+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:35.493941+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:36.494374+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:37.494570+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:38.494745+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:39.494885+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:40.495046+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:41.495284+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:42.495487+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:43.495659+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:44.495807+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:45.495967+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:46.496448+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:47.496601+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:48.496777+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:49.496977+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:50.497149+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:51.497328+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:52.497489+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:53.497725+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:54.497890+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:55.498084+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:56.498299+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:57.498460+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:58.498642+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:52:59.498950+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:00.499213+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:01.499404+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:02.499571+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:03.499778+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:04.499955+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:05.500166+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:06.500437+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:07.500611+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:08.500760+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:09.500888+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:10.501080+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:11.501288+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:12.501420+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:13.501595+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:14.501793+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:15.502013+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:16.502199+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:17.502418+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:18.502573+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:19.502806+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:20.502989+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:21.503267+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:22.503433+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:23.503612+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:24.503766+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:25.503927+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:26.504096+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:27.504248+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:28.504400+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:29.504556+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:30.504764+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:31.504912+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:32.505091+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:33.505244+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:34.505401+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:35.505608+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:36.505798+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:37.505980+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:38.506126+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:39.506302+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:40.506496+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:41.506687+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:42.506873+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:43.507134+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:44.507406+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:45.507610+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:46.507808+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:47.508019+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:48.508220+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:49.508374+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:50.508528+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:51.508687+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:52.508799+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:53.508923+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:54.509039+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:55.509199+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:56.509354+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:57.509473+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:58.509645+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:53:59.509891+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:00.510107+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:01.510391+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:02.510597+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:03.510875+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:04.511036+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:05.511266+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:06.511590+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:07.511787+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:08.511989+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:09.512239+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:10.512359+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64372736 unmapped: 663552 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:11.512550+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 454703 data_alloc: 218103808 data_used: 40960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 heartbeat osd_stat(store_statfs(0x4fe120000/0x0/0x4ffc00000, data 0x4820b/0xae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 63 handle_osd_map epochs [63,64], i have 63, src has [1,64]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 1067.654418945s of 1067.670898438s, submitted: 4
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 647168 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:12.512673+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64389120 unmapped: 647168 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:13.512830+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64471040 unmapped: 565248 heap: 65036288 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:14.513028+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fe118000/0x0/0x4ffc00000, data 0x4add3/0xb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 9797632 heap: 74350592 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 65 ms_handle_reset con 0x55a4ef0b4000 session 0x55a4ee2e6d20
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:15.513266+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64552960 unmapped: 9797632 heap: 74350592 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 65 heartbeat osd_stat(store_statfs(0x4fdca9000/0x0/0x4ffc00000, data 0x4badd3/0x524000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 65 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:16.513473+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 496467 data_alloc: 218103808 data_used: 49152
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64626688 unmapped: 9723904 heap: 74350592 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:17.514015+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 66 heartbeat osd_stat(store_statfs(0x4fdca5000/0x0/0x4ffc00000, data 0x4bc3a9/0x527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64651264 unmapped: 9699328 heap: 74350592 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:18.514461+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 18087936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 66 ms_handle_reset con 0x55a4ef0b4c00 session 0x55a4ee2e6960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:19.514712+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 18087936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:20.514865+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 18087936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:21.515156+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 556101 data_alloc: 218103808 data_used: 49152
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64659456 unmapped: 18087936 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:22.515366+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 18071552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:23.515814+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 67 heartbeat osd_stat(store_statfs(0x4fd4a1000/0x0/0x4ffc00000, data 0xcbd98f/0xd2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.846909523s of 11.953395844s, submitted: 16
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 18071552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:24.516030+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64675840 unmapped: 18071552 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:25.516326+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:26.516535+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:27.517105+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:28.517313+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:29.517697+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:30.517871+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:31.518109+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:32.518257+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:33.518533+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:34.518655+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:35.518879+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:36.519049+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:37.519241+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:38.519408+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:39.519567+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:40.519725+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:41.520029+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:42.520367+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:43.520559+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:44.520746+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:45.520901+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:46.521208+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:47.521432+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:48.521590+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:49.521788+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fd49f000/0x0/0x4ffc00000, data 0xcbee2f/0xd2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:50.522279+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:51.522414+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 558865 data_alloc: 218103808 data_used: 65536
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64692224 unmapped: 18055168 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:52.522635+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.079715729s of 29.092357635s, submitted: 9
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 18030592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 69 ms_handle_reset con 0x55a4ef0b4400 session 0x55a4ee2e7e00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:53.522758+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 18030592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:54.523029+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 64716800 unmapped: 18030592 heap: 82747392 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:55.523492+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fd49a000/0x0/0x4ffc00000, data 0xcc041c/0xd33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65232896 unmapped: 25911296 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:56.523654+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677841 data_alloc: 218103808 data_used: 65536
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fc49a000/0x0/0x4ffc00000, data 0x1cc0442/0x1d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 25804800 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:57.523799+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 69 heartbeat osd_stat(store_statfs(0x4fc49a000/0x0/0x4ffc00000, data 0x1cc0442/0x1d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65445888 unmapped: 25698304 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:58.523980+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 70 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee2e7a40
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65298432 unmapped: 25845760 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7800
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:54:59.524126+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 25780224 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:00.524263+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 71 ms_handle_reset con 0x55a4f07f7800 session 0x55a4ed444d20
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 25690112 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:01.524417+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 577509 data_alloc: 218103808 data_used: 77824
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65454080 unmapped: 25690112 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:02.524627+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 72 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee2f32c0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fd493000/0x0/0x4ffc00000, data 0xcc45bf/0xd3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65339392 unmapped: 25804800 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:03.524988+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fd493000/0x0/0x4ffc00000, data 0xcc45bf/0xd3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c3c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.891912460s of 11.310493469s, submitted: 105
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65290240 unmapped: 25853952 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:04.525122+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 73 ms_handle_reset con 0x55a4f07c3c00 session 0x55a4ee2e7860
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd493000/0x0/0x4ffc00000, data 0xcc45bf/0xd3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc5ba5/0xd3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 25812992 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:05.525314+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 73 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ed6b2960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 73 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed6963c0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 73 heartbeat osd_stat(store_statfs(0x4fd48f000/0x0/0x4ffc00000, data 0xcc5ba5/0xd3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65363968 unmapped: 25780224 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:06.525524+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 590680 data_alloc: 218103808 data_used: 86016
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65323008 unmapped: 25821184 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:07.525743+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 75 ms_handle_reset con 0x55a4ef0b4c00 session 0x55a4ed696780
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65331200 unmapped: 25812992 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:08.525940+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 75 heartbeat osd_stat(store_statfs(0x4fd489000/0x0/0x4ffc00000, data 0xcc8637/0xd44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65404928 unmapped: 25739264 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:09.526078+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 76 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ee32e000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 76 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed5e90e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65478656 unmapped: 25665536 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:10.526229+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c3c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 77 ms_handle_reset con 0x55a4f07c3c00 session 0x55a4ec84d860
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 77 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ed562780
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 25452544 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:11.526422+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 600747 data_alloc: 218103808 data_used: 90112
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fd484000/0x0/0x4ffc00000, data 0xccb2cb/0xd49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65691648 unmapped: 25452544 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:12.526573+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 77 ms_handle_reset con 0x55a4ef0b4400 session 0x55a4ee17d860
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 25419776 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:13.526766+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65724416 unmapped: 25419776 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fd484000/0x0/0x4ffc00000, data 0xccb2cb/0xd49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 77 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.694375038s of 10.103998184s, submitted: 108
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:14.526987+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 79 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee17c000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 79 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ee2faf00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 25395200 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:15.527254+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 24346624 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:16.527541+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 611007 data_alloc: 218103808 data_used: 90112
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 24330240 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:17.527731+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 80 heartbeat osd_stat(store_statfs(0x4fd479000/0x0/0x4ffc00000, data 0xccf407/0xd52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 24330240 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:18.527986+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 80 heartbeat osd_stat(store_statfs(0x4fd479000/0x0/0x4ffc00000, data 0xccf407/0xd52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a2f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 24330240 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:19.528261+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 24330240 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:20.528470+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 81 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ee2fba40
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 24272896 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:21.528641+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c3c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 82 ms_handle_reset con 0x55a4f07c3c00 session 0x55a4ee2fa000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 621716 data_alloc: 218103808 data_used: 98304
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 24264704 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:22.528747+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 82 ms_handle_reset con 0x55a4ef0b4000 session 0x55a4ee2fab40
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 24190976 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:23.528914+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 84 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee2f2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fd059000/0x0/0x4ffc00000, data 0xcd58fd/0xd63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 24068096 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:24.529097+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 84 ms_handle_reset con 0x55a4ef0b4000 session 0x55a4ed4eda40
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.696181297s of 10.913640022s, submitted: 75
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 24043520 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:25.529238+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 86 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed4ec960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 23961600 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:26.529419+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 86 heartbeat osd_stat(store_statfs(0x4fd051000/0x0/0x4ffc00000, data 0xcd947f/0xd6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e3f9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 648340 data_alloc: 218103808 data_used: 163840
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 23961600 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:27.529591+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 23961600 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:28.529804+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 23855104 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:29.529956+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 86 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 87 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ee2faf00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 23650304 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c3c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:30.530074+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 87 heartbeat osd_stat(store_statfs(0x4fbeae000/0x0/0x4ffc00000, data 0xcdabdf/0xd6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 88 ms_handle_reset con 0x55a4f07f7c00 session 0x55a4ec84d4a0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 88 ms_handle_reset con 0x55a4f07c3c00 session 0x55a4edd1f4a0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 88 heartbeat osd_stat(store_statfs(0x4fbeab000/0x0/0x4ffc00000, data 0xcdb7af/0xd70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 23519232 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:31.530316+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 654437 data_alloc: 218103808 data_used: 180224
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 89 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed4ed860
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 23363584 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:32.530498+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 90 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed2e41e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 21258240 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:33.530669+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 91 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ed6cef00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 21143552 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:34.530764+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 92 ms_handle_reset con 0x55a4f07c2000 session 0x55a4ed2d30e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 92 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ed5a4f00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 92 ms_handle_reset con 0x55a4f07f7400 session 0x55a4ed2d21e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 21020672 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:35.530928+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.749933243s of 10.585572243s, submitted: 226
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 93 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed2e4960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 20987904 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fbe9f000/0x0/0x4ffc00000, data 0xce152f/0xd7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:36.531057+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 93 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ee2e7680
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c2400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 673052 data_alloc: 218103808 data_used: 192512
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 94 ms_handle_reset con 0x55a4f07c2400 session 0x55a4ed6970e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 20987904 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:37.531270+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fbe9b000/0x0/0x4ffc00000, data 0xce2b17/0xd7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 21078016 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:38.531423+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 95 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed4ec000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 21110784 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:39.531555+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 21110784 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:40.531694+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4edcab2c0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 21102592 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:41.531836+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 681428 data_alloc: 218103808 data_used: 208896
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 21102592 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:42.531998+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 21102592 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 heartbeat osd_stat(store_statfs(0x4fbe98000/0x0/0x4ffc00000, data 0xce5794/0xd85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:43.532135+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07f7000 session 0x55a4edcaa3c0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07f7400 session 0x55a4edcaa1e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c3c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07c3c00 session 0x55a4edcaa000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee16a000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ee32eb40
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed5a4d20
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07f7400 session 0x55a4ed5a45a0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef5c4400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4ef5c4400 session 0x55a4ed5a41e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4edda74a0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 21110784 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed4ec000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:44.532283+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ee2f3c20
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 21110784 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:45.532436+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 97 ms_handle_reset con 0x55a4f07f7400 session 0x55a4ee2e7680
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1800
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 97 ms_handle_reset con 0x55a4f07c1800 session 0x55a4ee2e65a0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.030328751s of 10.426925659s, submitted: 139
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 97 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ee2e6b40
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 21094400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:46.532607+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 689930 data_alloc: 218103808 data_used: 221184
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fbe93000/0x0/0x4ffc00000, data 0xce6cbe/0xd8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 21094400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:47.532796+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0xce6ce1/0xd8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 21094400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:48.533023+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 21094400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:49.533239+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fbe92000/0x0/0x4ffc00000, data 0xce6ce1/0xd8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70049792 unmapped: 21094400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:50.533418+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 97 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ee32f4a0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 21061632 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:51.533598+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 98 ms_handle_reset con 0x55a4f07f7400 session 0x55a4ee2fa960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 699664 data_alloc: 218103808 data_used: 229376
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 98 ms_handle_reset con 0x55a4f07f6000 session 0x55a4ed5e90e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 98 ms_handle_reset con 0x55a4f07f6400 session 0x55a4ed54d0e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 98 ms_handle_reset con 0x55a4f07f6400 session 0x55a4ee2e7680
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 20881408 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:52.533745+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 99 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed5a41e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 20815872 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:53.533958+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 100 ms_handle_reset con 0x55a4f07f6000 session 0x55a4edcaa000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 20750336 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:54.534148+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 100 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ed699e00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 100 ms_handle_reset con 0x55a4f07f7400 session 0x55a4ed698b40
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 100 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed5a41e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 100 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0xceaf13/0xd97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 100 ms_handle_reset con 0x55a4f07f6000 session 0x55a4ee17da40
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 20701184 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:55.534356+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6400
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 20627456 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:56.534513+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.019435883s of 10.487161636s, submitted: 76
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 708356 data_alloc: 218103808 data_used: 249856
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 101 ms_handle_reset con 0x55a4f07f6400 session 0x55a4ed2e4f00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 19587072 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:57.534648+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 101 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ee32e960
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 19587072 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:58.534787+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f7000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fbe84000/0x0/0x4ffc00000, data 0xcec4a9/0xd97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [0,0,1])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 19578880 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:55:59.534977+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 102 ms_handle_reset con 0x55a4f07f7000 session 0x55a4ed6b2780
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 19570688 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:00.535194+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 19570688 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:01.535499+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 705861 data_alloc: 218103808 data_used: 233472
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 19570688 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:02.535647+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 19570688 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:03.535816+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:04.535970+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 19546112 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 103 ms_handle_reset con 0x55a4ef0b4000 session 0x55a4edda7860
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fbe84000/0x0/0x4ffc00000, data 0xceeef7/0xd99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 103 ms_handle_reset con 0x55a4eeeb2000 session 0x55a4ed2d21e0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:05.536119+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 19562496 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:06.536331+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 19562496 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07c1c00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 104 ms_handle_reset con 0x55a4f07c1c00 session 0x55a4ed699680
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 104 heartbeat osd_stat(store_statfs(0x4fbe85000/0x0/0x4ffc00000, data 0xceeed4/0xd98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4f07f6000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.650251389s of 10.128237724s, submitted: 119
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 712528 data_alloc: 218103808 data_used: 237568
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:07.536483+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 19546112 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _renew_subs
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 105 ms_handle_reset con 0x55a4f07f6000 session 0x55a4ee2fa780
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:08.536588+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 19529728 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fbe7f000/0x0/0x4ffc00000, data 0xcf1982/0xd9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:09.536750+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 19529728 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:10.536911+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 19529728 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:11.537029+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 19529728 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 715326 data_alloc: 218103808 data_used: 237568
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:12.537162+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 19521536 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:13.537364+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 19521536 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:14.537508+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 19521536 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fbe7f000/0x0/0x4ffc00000, data 0xcf1982/0xd9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 105 heartbeat osd_stat(store_statfs(0x4fbe7f000/0x0/0x4ffc00000, data 0xcf1982/0xd9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:15.537644+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 19521536 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:16.537812+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 718300 data_alloc: 218103808 data_used: 237568
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:17.537964+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:18.538147+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:19.538303+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:20.538471+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:21.538655+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 718300 data_alloc: 218103808 data_used: 237568
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:22.538811+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:23.539011+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:24.539220+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:25.539409+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:26.539629+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:27.539889+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:28.540081+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:29.540241+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:30.541890+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:31.542548+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:32.543901+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:33.544104+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:34.545275+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 19513344 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:35.545576+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:36.546247+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:37.547089+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:38.547261+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:39.547421+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:40.547548+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:41.547727+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:42.547861+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:43.547988+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:44.548220+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:45.548395+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:46.548586+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:47.548763+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:48.548889+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:49.549042+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:50.549203+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:51.549430+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:52.549619+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:53.549812+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:54.550003+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:55.550232+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:56.550425+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 19505152 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:57.550581+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:58.550738+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:56:59.550867+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:00.551016+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:01.551115+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:02.551307+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:03.551508+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:04.551695+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:05.551882+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:06.552088+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 19496960 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:07.552256+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:08.552364+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:09.552521+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:10.552669+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:11.552831+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:12.553017+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:13.553213+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:14.553379+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:15.553507+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:16.553689+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:17.553798+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:18.553931+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:19.554103+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:20.554318+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:21.554439+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:22.554596+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:23.554752+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:24.554928+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:25.555084+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:26.555308+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:27.555472+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:28.555617+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:29.555827+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:30.556029+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:31.556469+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:32.558037+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:33.559114+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:34.560736+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:35.561226+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:36.561484+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:37.562375+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:38.563037+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:39.563213+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 19488768 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:40.563389+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 19480576 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:41.563599+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 19480576 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:42.563908+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:43.564121+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:44.564334+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:45.564597+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:46.564877+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:47.565047+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:48.565165+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:49.565345+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:50.565498+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:51.565633+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:52.565745+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:53.565885+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:54.566012+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:55.566160+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:56.566344+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:57.566455+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:58.566584+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:57:59.566724+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:00.566879+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:01.566991+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 19472384 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:02.567101+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'config diff' '{prefix=config diff}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'config show' '{prefix=config show}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 19046400 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:03.567218+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 18735104 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:04.567343+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 18915328 heap: 91144192 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'log dump' '{prefix=log dump}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:05.567462+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'perf dump' '{prefix=perf dump}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 29589504 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'perf schema' '{prefix=perf schema}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:06.567627+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:07.567768+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:08.567882+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:09.567984+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:10.568098+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:11.568216+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:12.568344+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:13.568450+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:14.568564+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:15.568709+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:16.568864+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:17.568985+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:18.569136+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:19.569272+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:20.569419+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:21.569531+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:22.569668+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:23.569787+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:24.569914+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:25.570030+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:26.570180+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 29507584 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:27.570701+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:28.570832+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:29.570945+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:30.571138+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:31.571270+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:32.571399+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:33.571626+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:34.571760+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:35.571909+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:36.572289+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:37.572484+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:38.572611+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:39.572790+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:40.573203+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:41.573372+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:42.573533+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:43.573725+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:44.573883+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:45.574082+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:46.574290+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:47.574440+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:48.574580+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:49.574759+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:50.575016+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:51.575319+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:52.575494+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:53.575717+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:54.575917+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:55.576146+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:56.576392+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:57.576635+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:58.576806+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:58:59.576948+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:00.577117+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:01.577391+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:02.577537+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:03.577660+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:04.577839+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:05.578048+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:06.578305+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:07.578476+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:08.579027+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:09.579306+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:10.580899+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 29499392 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:11.581638+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:12.582033+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:13.582549+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:14.582893+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:15.583847+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:16.584426+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:17.584620+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:18.585021+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:19.585254+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:20.585502+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:21.585785+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:22.586063+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:23.586241+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:24.586470+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:25.586619+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:26.586796+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:27.586934+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:28.587111+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:29.587293+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:30.587430+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:31.587536+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:32.587670+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:33.587866+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:34.588033+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:35.588240+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:36.588571+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:37.588729+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:38.588891+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:39.589053+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:40.589231+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:41.590775+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:42.591796+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:43.592204+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:44.593140+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:45.593872+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:46.594578+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:47.595299+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:48.595683+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:49.596125+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:50.596377+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:51.596659+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:52.597131+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:53.597458+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:54.597710+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:55.597982+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:56.598259+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 29491200 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:57.598445+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:58.598664+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-25T23:59:59.598875+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:00.599077+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:01.599280+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:02.599483+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:03.599665+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:04.599857+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:05.600067+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:06.600319+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:07.600516+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:08.600707+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:09.600846+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:10.601011+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:11.601201+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:12.601273+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:13.601392+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:14.601530+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:15.601674+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:16.601847+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:17.601953+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:18.602101+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:19.602266+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:20.602424+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:21.602544+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:22.602696+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:23.602836+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:24.602983+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:25.603109+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:26.603284+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:27.603395+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:28.603567+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:29.603700+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:30.603830+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:31.603941+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:32.604149+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:33.604346+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:34.604504+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 29483008 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:35.604672+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:36.604891+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:37.605056+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:38.605229+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:39.605355+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:40.605496+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:41.605628+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:42.605755+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:43.605901+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:44.606088+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:45.606403+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:46.606684+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:47.606994+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:48.607300+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:49.607451+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:50.608034+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:51.608262+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:52.608868+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:53.609262+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:54.609861+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:55.610280+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:56.610560+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:57.610715+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:58.611038+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:00:59.611247+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:00.611636+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:01.611856+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:02.612249+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:03.612516+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:04.612694+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:05.612966+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:06.613268+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:07.613423+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:08.613612+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:09.613861+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:10.614060+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:11.614270+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:12.614494+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:13.614693+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:14.614927+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 29474816 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:15.615267+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:16.615470+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:17.615661+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:18.615777+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:19.615917+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:20.616043+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:21.616254+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:22.616385+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:23.616527+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:24.616694+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:25.616829+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:26.617005+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:27.617113+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:28.617281+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:29.617416+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:30.617571+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:31.617767+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:32.617918+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:33.618017+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:34.618154+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:35.618356+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:36.618540+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:37.618677+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:38.618806+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:39.618922+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:40.619050+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:41.619290+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:42.619440+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:43.619560+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:44.620880+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:45.621186+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:46.621348+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:47.621513+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:48.621660+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:49.621811+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:50.622082+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:51.622243+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 29466624 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:52.622408+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:53.622544+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:54.622683+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:55.622805+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:56.622980+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:57.623140+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:58.623245+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:01:59.623412+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:00.623572+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:01.623667+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:02.623800+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:03.623966+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:04.624135+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:05.624268+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:06.624438+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:07.624635+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:08.624779+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:09.624934+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:10.625132+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:11.625334+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:12.625674+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:13.625771+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:14.626001+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:15.626277+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:16.626439+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:17.626628+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:18.626781+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:19.626916+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:20.627063+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 5992 writes, 24K keys, 5992 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 5992 writes, 1182 syncs, 5.07 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1626 writes, 4173 keys, 1626 commit groups, 1.0 writes per commit group, ingest: 2.37 MB, 0.00 MB/s
                                           Interval WAL: 1626 writes, 724 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:21.627622+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:22.627825+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:23.628273+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:24.628605+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:25.628858+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 29458432 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:26.629153+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: mgrc ms_handle_reset ms_handle_reset con 0x55a4ecca0800
Nov 26 00:06:44 compute-0 ceph-osd[89044]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/855624559
Nov 26 00:06:44 compute-0 ceph-osd[89044]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/855624559,v1:192.168.122.100:6801/855624559]
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: get_auth_request con 0x55a4f07f7000 auth_method 0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: mgrc handle_mgr_configure stats_period=5
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 29278208 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:27.629425+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 29278208 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:28.629572+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 29278208 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:29.629770+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 29278208 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:30.629911+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 ms_handle_reset con 0x55a4ef0b4800 session 0x55a4ee32ed20
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4eeeb2000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 29278208 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:31.630083+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 29270016 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:32.630255+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 29270016 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:33.630414+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 29270016 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:34.630677+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 29270016 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:35.630920+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:36.631266+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:37.631551+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:38.631803+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:39.632072+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:40.632281+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:41.632450+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:42.632629+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:43.632755+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:44.633001+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:45.633291+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:46.633504+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:47.633687+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:48.633866+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:49.634012+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:50.634268+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:51.634416+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:52.634647+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:53.635252+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:54.635596+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:55.635769+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:56.636015+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:57.636202+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:58.636323+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:02:59.636434+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:00.636612+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:01.636806+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:02.637008+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:03.637438+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:04.637599+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:05.637775+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:06.637988+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:07.638117+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:08.638226+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:09.638432+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 ms_handle_reset con 0x55a4ed46f800 session 0x55a4ed452f00
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: handle_auth_request added challenge on 0x55a4ef0b4000
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:10.638644+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:11.638788+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:12.638929+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:13.639131+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:14.639293+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:15.639414+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:16.639736+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:17.639994+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:18.640162+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:19.640367+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:20.640508+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:21.640769+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:22.640965+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:23.641135+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 29261824 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:24.641333+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:25.641490+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:26.641806+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:27.642050+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:28.642268+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:29.642445+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:30.642773+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:31.642948+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:32.643363+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:33.643602+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:34.644022+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:35.644309+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:36.644542+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:37.644746+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:38.644961+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:39.645116+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:40.645309+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:41.645491+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:42.645603+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:43.645737+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:44.645868+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:45.646031+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:46.646255+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:47.646398+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:48.646512+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:49.646717+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:50.646829+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:51.646951+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:52.647077+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:53.647217+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:54.647340+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:55.647469+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:56.647641+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 29253632 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:57.647845+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:58.648009+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:03:59.648132+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:00.648344+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:01.648524+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:02.648715+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:03.648868+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:04.648987+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:05.649129+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:06.649317+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:07.649488+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:08.649623+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:09.649759+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:10.649904+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:11.650021+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:12.650256+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:13.650455+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:14.650628+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:15.650803+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:16.651022+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:17.651225+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:18.651359+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:19.651533+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:20.651669+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:21.651853+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:22.652053+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:23.652233+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:24.652381+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:25.652527+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:26.652696+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:27.652812+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:28.652948+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:29.653095+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:30.653598+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:31.653929+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:32.654234+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:33.654409+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:34.654522+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:35.655041+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:36.655589+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:37.656241+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:38.656493+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:39.656745+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:40.656938+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:41.657085+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:42.657415+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:43.657630+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:44.657783+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:45.657990+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:46.658301+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:47.658711+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:48.658922+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:49.659086+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:50.659335+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:51.659508+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:52.659665+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:53.659895+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:54.660093+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:55.660345+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:56.660554+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:57.660723+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 29245440 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:58.660893+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:04:59.661090+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:00.661253+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:01.661397+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:02.661528+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:03.661670+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:04.661832+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:05.662051+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:06.662245+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:07.662392+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:08.662537+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:09.662667+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:10.662765+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:11.662910+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:12.663055+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:13.663512+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:14.663640+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:15.663948+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:16.664126+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:17.664287+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:18.664481+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:19.664674+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:20.664841+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:21.664976+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:22.665230+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:23.665353+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:24.665476+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:25.665629+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:26.665813+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:27.665970+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:28.666165+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:29.666423+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:30.666666+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:31.666811+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:32.666962+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:33.667090+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:34.667301+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:35.668287+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:36.668599+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:37.669052+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:38.669701+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:39.669945+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:40.670154+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:41.670379+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:42.670897+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:43.671295+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:44.671653+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:45.671993+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:46.672254+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:47.672417+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:48.672788+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:49.673266+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:50.673670+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:51.673894+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:52.674095+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1816757859' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 26 00:06:44 compute-0 ceph-mon[75654]: from='client.15151 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:44 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2451013709' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 26 00:06:44 compute-0 ceph-mon[75654]: from='client.15155 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 26 00:06:44 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1131843593' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:53.674384+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:54.674594+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:55.674813+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:56.675142+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:57.675337+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:58.675561+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:05:59.675678+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:00.675850+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:01.676000+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 29237248 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:02.676194+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 29229056 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:03.676421+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 29229056 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:04.676610+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 29229056 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:05.676805+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 29229056 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:06.677062+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 29229056 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:07.677267+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 29229056 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:08.677448+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 26 00:06:44 compute-0 ceph-osd[89044]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 26 00:06:44 compute-0 ceph-osd[89044]: bluestore.MempoolThread(0x55a4ebb47b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720540 data_alloc: 218103808 data_used: 294912
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 29229056 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:09.677655+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 29229056 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:10.677844+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 29229056 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:11.677979+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'config diff' '{prefix=config diff}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'config show' '{prefix=config show}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'counter dump' '{prefix=counter dump}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'counter schema' '{prefix=counter schema}'
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 29130752 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:12.678161+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 29376512 heap: 102187008 old mem: 2845415832 new mem: 2845415832
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: tick
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_tickets
Nov 26 00:06:44 compute-0 ceph-osd[89044]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-26T00:06:13.678339+0000)
Nov 26 00:06:44 compute-0 ceph-osd[89044]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fbe7c000/0x0/0x4ffc00000, data 0xcf2e22/0xda1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 26 00:06:44 compute-0 ceph-osd[89044]: do_command 'log dump' '{prefix=log dump}'
Nov 26 00:06:44 compute-0 rsyslogd[1001]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 00:06:44 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1141: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:44 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 26 00:06:44 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643515862' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 26 00:06:44 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 26 00:06:44 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 26 00:06:45 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 26 00:06:45 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/906152799' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 26 00:06:45 compute-0 ceph-mon[75654]: pgmap v1141: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:45 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1643515862' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 26 00:06:45 compute-0 ceph-mon[75654]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 26 00:06:45 compute-0 ceph-mon[75654]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 26 00:06:45 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/906152799' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 26 00:06:45 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15167 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 26 00:06:46 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107626537' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 26 00:06:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 26 00:06:46 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544207781' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 26 00:06:46 compute-0 ceph-mon[75654]: from='client.15167 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:46 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/107626537' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 26 00:06:46 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3544207781' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 26 00:06:46 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1142: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:46 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 26 00:06:46 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654842759' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 26 00:06:47 compute-0 systemd[1]: Starting Hostname Service...
Nov 26 00:06:47 compute-0 systemd[1]: Started Hostname Service.
Nov 26 00:06:47 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 26 00:06:47 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2906412416' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 26 00:06:47 compute-0 ceph-mon[75654]: pgmap v1142: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:47 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3654842759' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 26 00:06:47 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/2906412416' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 26 00:06:47 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15177 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:48 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 26 00:06:48 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/808130556' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 26 00:06:48 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 26 00:06:48 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951429874' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 26 00:06:48 compute-0 ceph-mon[75654]: from='client.15177 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:48 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/808130556' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 26 00:06:48 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1951429874' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 26 00:06:48 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1143: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:49 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15183 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:49 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 26 00:06:49 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1973247838' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 26 00:06:49 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15187 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:49 compute-0 ceph-mon[75654]: pgmap v1143: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:49 compute-0 ceph-mon[75654]: from='client.15183 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:49 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1973247838' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 26 00:06:50 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15189 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 26 00:06:50 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326868980' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 26 00:06:50 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 26 00:06:50 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524208782' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 26 00:06:50 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1144: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:50 compute-0 ceph-mon[75654]: from='client.15187 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:50 compute-0 ceph-mon[75654]: from='client.15189 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:50 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/326868980' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 26 00:06:50 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1524208782' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 26 00:06:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.081649) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115611081682, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1252, "num_deletes": 507, "total_data_size": 901504, "memory_usage": 923784, "flush_reason": "Manual Compaction"}
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115611091785, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 878219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22700, "largest_seqno": 23951, "table_properties": {"data_size": 872513, "index_size": 2461, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16488, "raw_average_key_size": 19, "raw_value_size": 858721, "raw_average_value_size": 1016, "num_data_blocks": 109, "num_entries": 845, "num_filter_entries": 845, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764115536, "oldest_key_time": 1764115536, "file_creation_time": 1764115611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 10209 microseconds, and 3515 cpu microseconds.
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.091856) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 878219 bytes OK
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.091874) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.093153) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.093164) EVENT_LOG_v1 {"time_micros": 1764115611093160, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.093201) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 894489, prev total WAL file size 894489, number of live WAL files 2.
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.093687) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(857KB)], [53(6728KB)]
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115611093734, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 7768010, "oldest_snapshot_seqno": -1}
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15195 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4268 keys, 5596912 bytes, temperature: kUnknown
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115611168360, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 5596912, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5567690, "index_size": 17464, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 104814, "raw_average_key_size": 24, "raw_value_size": 5490293, "raw_average_value_size": 1286, "num_data_blocks": 734, "num_entries": 4268, "num_filter_entries": 4268, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764113467, "oldest_key_time": 0, "file_creation_time": 1764115611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cf57a6b1-796f-4cfa-b350-53eb10a4554d", "db_session_id": "Q7VS70283MEZ1V621ZPR", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.168567) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 5596912 bytes
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.173981) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.0 rd, 74.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.6 +0.0 blob) out(5.3 +0.0 blob), read-write-amplify(15.2) write-amplify(6.4) OK, records in: 5295, records dropped: 1027 output_compression: NoCompression
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.174001) EVENT_LOG_v1 {"time_micros": 1764115611173993, "job": 28, "event": "compaction_finished", "compaction_time_micros": 74687, "compaction_time_cpu_micros": 19647, "output_level": 6, "num_output_files": 1, "total_output_size": 5596912, "num_input_records": 5295, "num_output_records": 4268, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115611174256, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764115611175265, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.093608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.175322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.175329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.175331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.175333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:06:51 compute-0 ceph-mon[75654]: rocksdb: (Original Log Time 2025/11/26-00:06:51.175335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15197 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 26 00:06:51 compute-0 ceph-mgr[75954]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 26 00:06:51 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 26 00:06:51 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1419627900' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 26 00:06:52 compute-0 ceph-mon[75654]: pgmap v1144: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:52 compute-0 ceph-mon[75654]: from='client.15195 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:52 compute-0 ceph-mon[75654]: from='client.15197 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:52 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/1419627900' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 26 00:06:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 26 00:06:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694185155' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 26 00:06:52 compute-0 sudo[281136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:06:52 compute-0 sudo[281136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:06:52 compute-0 sudo[281136]: pam_unix(sudo:session): session closed for user root
Nov 26 00:06:52 compute-0 sudo[281182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:06:52 compute-0 sudo[281182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:06:52 compute-0 sudo[281182]: pam_unix(sudo:session): session closed for user root
Nov 26 00:06:52 compute-0 sudo[281218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:06:52 compute-0 sudo[281218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:06:52 compute-0 sudo[281218]: pam_unix(sudo:session): session closed for user root
Nov 26 00:06:52 compute-0 sudo[281275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 26 00:06:52 compute-0 sudo[281275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:06:52 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15203 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:52 compute-0 ceph-mgr[75954]: log_channel(cluster) log [DBG] : pgmap v1145: 177 pgs: 177 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 26 00:06:52 compute-0 sudo[281275]: pam_unix(sudo:session): session closed for user root
Nov 26 00:06:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:06:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:06:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 26 00:06:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:06:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 26 00:06:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:06:52 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev aaeab7eb-baa1-44e1-9abc-d6b3692e3e50 does not exist
Nov 26 00:06:52 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 21e07694-94f1-4fb2-b6ec-d53c76d71227 does not exist
Nov 26 00:06:52 compute-0 ceph-mgr[75954]: [progress WARNING root] complete: ev 6d4afbbf-af1a-475c-b864-aefe0b803f5f does not exist
Nov 26 00:06:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 26 00:06:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:06:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 26 00:06:52 compute-0 ceph-mon[75654]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:06:52 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 26 00:06:52 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:06:52 compute-0 ceph-mgr[75954]: log_channel(audit) log [DBG] : from='client.15205 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 26 00:06:52 compute-0 sudo[281470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:06:52 compute-0 sudo[281470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:06:53 compute-0 sudo[281470]: pam_unix(sudo:session): session closed for user root
Nov 26 00:06:53 compute-0 sudo[281515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 26 00:06:53 compute-0 sudo[281515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:06:53 compute-0 sudo[281515]: pam_unix(sudo:session): session closed for user root
Nov 26 00:06:53 compute-0 ceph-mon[75654]: from='client.? 192.168.122.100:0/3694185155' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 26 00:06:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:06:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 26 00:06:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' 
Nov 26 00:06:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 26 00:06:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 26 00:06:53 compute-0 ceph-mon[75654]: from='mgr.14132 192.168.122.100:0/2229340376' entity='mgr.compute-0.gwqfsl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 26 00:06:53 compute-0 sudo[281551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 26 00:06:53 compute-0 sudo[281551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:06:53 compute-0 sudo[281551]: pam_unix(sudo:session): session closed for user root
Nov 26 00:06:53 compute-0 sudo[281608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/101922db-575f-58e2-980f-928050464f69/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 101922db-575f-58e2-980f-928050464f69 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 26 00:06:53 compute-0 sudo[281608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 26 00:06:53 compute-0 ceph-mon[75654]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 26 00:06:53 compute-0 ceph-mon[75654]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/953580267' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
